2026-04-06 01:38:18.499218 | Job console starting
2026-04-06 01:38:18.513245 | Updating git repos
2026-04-06 01:38:18.565443 | Cloning repos into workspace
2026-04-06 01:38:18.795968 | Restoring repo states
2026-04-06 01:38:18.816708 | Merging changes
2026-04-06 01:38:18.816730 | Checking out repos
2026-04-06 01:38:19.113220 | Preparing playbooks
2026-04-06 01:38:19.794433 | Running Ansible setup
2026-04-06 01:38:24.225971 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-06 01:38:24.970924 |
2026-04-06 01:38:24.971097 | PLAY [Base pre]
2026-04-06 01:38:24.988525 |
2026-04-06 01:38:24.988664 | TASK [Setup log path fact]
2026-04-06 01:38:25.024846 | orchestrator | ok
2026-04-06 01:38:25.048030 |
2026-04-06 01:38:25.048191 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-06 01:38:25.091719 | orchestrator | ok
2026-04-06 01:38:25.106970 |
2026-04-06 01:38:25.107114 | TASK [emit-job-header : Print job information]
2026-04-06 01:38:25.149297 | # Job Information
2026-04-06 01:38:25.149540 | Ansible Version: 2.16.14
2026-04-06 01:38:25.149590 | Job: testbed-upgrade-stable-ubuntu-24.04
2026-04-06 01:38:25.149636 | Pipeline: periodic-midnight
2026-04-06 01:38:25.149669 | Executor: 521e9411259a
2026-04-06 01:38:25.149698 | Triggered by: https://github.com/osism/testbed
2026-04-06 01:38:25.149728 | Event ID: 8a3b6f1da03e49db83363cd232c114b7
2026-04-06 01:38:25.158749 |
2026-04-06 01:38:25.158917 | LOOP [emit-job-header : Print node information]
2026-04-06 01:38:25.296693 | orchestrator | ok:
2026-04-06 01:38:25.296976 | orchestrator | # Node Information
2026-04-06 01:38:25.297035 | orchestrator | Inventory Hostname: orchestrator
2026-04-06 01:38:25.297078 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-06 01:38:25.297114 | orchestrator | Username: zuul-testbed06
2026-04-06 01:38:25.297149 | orchestrator | Distro: Debian 12.13
2026-04-06 01:38:25.297189 | orchestrator | Provider: static-testbed
2026-04-06 01:38:25.297224 | orchestrator | Region:
2026-04-06 01:38:25.297281 | orchestrator | Label: testbed-orchestrator
2026-04-06 01:38:25.297318 | orchestrator | Product Name: OpenStack Nova
2026-04-06 01:38:25.297350 | orchestrator | Interface IP: 81.163.193.140
2026-04-06 01:38:25.312003 |
2026-04-06 01:38:25.312132 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-06 01:38:25.783431 | orchestrator -> localhost | changed
2026-04-06 01:38:25.799053 |
2026-04-06 01:38:25.799209 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-06 01:38:26.910651 | orchestrator -> localhost | changed
2026-04-06 01:38:26.925670 |
2026-04-06 01:38:26.925853 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-06 01:38:27.222595 | orchestrator -> localhost | ok
2026-04-06 01:38:27.229939 |
2026-04-06 01:38:27.230053 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-06 01:38:27.259133 | orchestrator | ok
2026-04-06 01:38:27.275573 | orchestrator | included: /var/lib/zuul/builds/bd4205f76b04427cb48779fdbca318fd/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-06 01:38:27.283708 |
2026-04-06 01:38:27.283807 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-06 01:38:28.766564 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-06 01:38:28.767062 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/bd4205f76b04427cb48779fdbca318fd/work/bd4205f76b04427cb48779fdbca318fd_id_rsa
2026-04-06 01:38:28.767161 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/bd4205f76b04427cb48779fdbca318fd/work/bd4205f76b04427cb48779fdbca318fd_id_rsa.pub
2026-04-06 01:38:28.767220 | orchestrator -> localhost | The key fingerprint is:
2026-04-06 01:38:28.767308 | orchestrator -> localhost | SHA256:rbRgb9rymTVVa1nGwpLETcd/K0S3iO/GHfGAd3dM01U zuul-build-sshkey
2026-04-06 01:38:28.767361 | orchestrator -> localhost | The key's randomart image is:
2026-04-06 01:38:28.767427 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-06 01:38:28.767475 | orchestrator -> localhost | | ..o..E|
2026-04-06 01:38:28.767523 | orchestrator -> localhost | | ..+.=+|
2026-04-06 01:38:28.767566 | orchestrator -> localhost | | =.*oB|
2026-04-06 01:38:28.767608 | orchestrator -> localhost | | . ..=oOB|
2026-04-06 01:38:28.767652 | orchestrator -> localhost | | o S . +.+oB|
2026-04-06 01:38:28.767699 | orchestrator -> localhost | | . + o . + o.|
2026-04-06 01:38:28.767743 | orchestrator -> localhost | | = o o o .|
2026-04-06 01:38:28.767787 | orchestrator -> localhost | | .+ + . + . |
2026-04-06 01:38:28.767830 | orchestrator -> localhost | | .o= . |
2026-04-06 01:38:28.767873 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-06 01:38:28.767993 | orchestrator -> localhost | ok: Runtime: 0:00:01.000161
2026-04-06 01:38:28.783443 |
2026-04-06 01:38:28.783579 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-06 01:38:28.822530 | orchestrator | ok
2026-04-06 01:38:28.836697 | orchestrator | included: /var/lib/zuul/builds/bd4205f76b04427cb48779fdbca318fd/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-06 01:38:28.846485 |
2026-04-06 01:38:28.846594 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-06 01:38:28.871966 | orchestrator | skipping: Conditional result was False
2026-04-06 01:38:28.881243 |
2026-04-06 01:38:28.881391 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-06 01:38:29.507521 | orchestrator | changed
2026-04-06 01:38:29.518527 |
2026-04-06 01:38:29.518690 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-06 01:38:29.863050 | orchestrator | ok
2026-04-06 01:38:29.872378 |
2026-04-06 01:38:29.872506 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-06 01:38:30.349638 | orchestrator | ok
2026-04-06 01:38:30.358238 |
2026-04-06 01:38:30.358414 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-06 01:38:30.800167 | orchestrator | ok
2026-04-06 01:38:30.808845 |
2026-04-06 01:38:30.808988 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-06 01:38:30.833231 | orchestrator | skipping: Conditional result was False
2026-04-06 01:38:30.845627 |
2026-04-06 01:38:30.845783 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-06 01:38:31.322102 | orchestrator -> localhost | changed
2026-04-06 01:38:31.348021 |
2026-04-06 01:38:31.348341 | TASK [add-build-sshkey : Add back temp key]
2026-04-06 01:38:31.720010 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/bd4205f76b04427cb48779fdbca318fd/work/bd4205f76b04427cb48779fdbca318fd_id_rsa (zuul-build-sshkey)
2026-04-06 01:38:31.720616 | orchestrator -> localhost | ok: Runtime: 0:00:00.019822
2026-04-06 01:38:31.737318 |
2026-04-06 01:38:31.737482 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-06 01:38:32.183805 | orchestrator | ok
2026-04-06 01:38:32.190060 |
2026-04-06 01:38:32.190184 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-06 01:38:32.224289 | orchestrator | skipping: Conditional result was False
2026-04-06 01:38:32.278705 |
2026-04-06 01:38:32.278891 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-06 01:38:32.769117 | orchestrator | ok
2026-04-06 01:38:32.782020 |
2026-04-06 01:38:32.782148 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-06 01:38:32.828996 | orchestrator | ok
2026-04-06 01:38:32.839714 |
2026-04-06 01:38:32.839849 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-06 01:38:33.170457 | orchestrator -> localhost | ok
2026-04-06 01:38:33.184437 |
2026-04-06 01:38:33.184581 | TASK [validate-host : Collect information about the host]
2026-04-06 01:38:34.461361 | orchestrator | ok
2026-04-06 01:38:34.479189 |
2026-04-06 01:38:34.479351 | TASK [validate-host : Sanitize hostname]
2026-04-06 01:38:34.545448 | orchestrator | ok
2026-04-06 01:38:34.553957 |
2026-04-06 01:38:34.554092 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-06 01:38:35.151121 | orchestrator -> localhost | changed
2026-04-06 01:38:35.166168 |
2026-04-06 01:38:35.166403 | TASK [validate-host : Collect information about zuul worker]
2026-04-06 01:38:35.653841 | orchestrator | ok
2026-04-06 01:38:35.663343 |
2026-04-06 01:38:35.663487 | TASK [validate-host : Write out all zuul information for each host]
2026-04-06 01:38:36.247919 | orchestrator -> localhost | changed
2026-04-06 01:38:36.267374 |
2026-04-06 01:38:36.267514 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-06 01:38:36.568400 | orchestrator | ok
2026-04-06 01:38:36.578549 |
2026-04-06 01:38:36.578692 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-06 01:38:59.610960 | orchestrator | changed:
2026-04-06 01:38:59.611186 | orchestrator | .d..t...... src/
2026-04-06 01:38:59.611223 | orchestrator | .d..t...... src/github.com/
2026-04-06 01:38:59.611250 | orchestrator | .d..t...... src/github.com/osism/
2026-04-06 01:38:59.611298 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-06 01:38:59.611321 | orchestrator | RedHat.yml
2026-04-06 01:38:59.626303 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-06 01:38:59.626320 | orchestrator | RedHat.yml
2026-04-06 01:38:59.626373 | orchestrator | = 2.2.0"...
2026-04-06 01:39:10.962417 | orchestrator | - Finding latest version of hashicorp/null...
2026-04-06 01:39:10.981194 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-04-06 01:39:11.544521 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-06 01:39:12.288146 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-06 01:39:12.549189 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-06 01:39:13.116388 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-06 01:39:13.368215 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-06 01:39:13.801239 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-06 01:39:13.801310 | orchestrator |
2026-04-06 01:39:13.801318 | orchestrator | Providers are signed by their developers.
2026-04-06 01:39:13.801324 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-06 01:39:13.801330 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-06 01:39:13.801346 | orchestrator |
2026-04-06 01:39:13.801352 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-06 01:39:13.801357 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-06 01:39:13.801377 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-06 01:39:13.801382 | orchestrator | you run "tofu init" in the future.
2026-04-06 01:39:13.801641 | orchestrator |
2026-04-06 01:39:13.801651 | orchestrator | OpenTofu has been successfully initialized!
2026-04-06 01:39:13.801707 | orchestrator |
2026-04-06 01:39:13.801714 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-06 01:39:13.801719 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-06 01:39:13.801724 | orchestrator | should now work.
2026-04-06 01:39:13.801729 | orchestrator |
2026-04-06 01:39:13.801734 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-06 01:39:13.801739 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-06 01:39:13.801745 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-06 01:39:14.016177 | orchestrator | Created and switched to workspace "ci"!
2026-04-06 01:39:14.016230 | orchestrator |
2026-04-06 01:39:14.016236 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-06 01:39:14.016241 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-06 01:39:14.016261 | orchestrator | for this configuration.
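The init, lock-file, and workspace messages above come from OpenTofu. A minimal dry-run sketch of the command sequence that would produce this output (the workspace name `ci` comes from the log; the `run` echo wrapper is an illustration added here, not part of the job):

```shell
#!/bin/sh
# Sketch of the OpenTofu bootstrap steps seen in the log above.
# The wrapper only prints the commands; replace with `run() { "$@"; }` to execute.
set -eu

run() { printf '+ %s\n' "$*"; }

run tofu init              # installs providers, writes .terraform.lock.hcl
run tofu workspace new ci  # prints: Created and switched to workspace "ci"!
run tofu plan              # ci.auto.tfvars is auto-loaded by its *.auto.tfvars name
```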
2026-04-06 01:39:14.180848 | orchestrator | ci.auto.tfvars
2026-04-06 01:39:14.183570 | orchestrator | default_custom.tf
2026-04-06 01:39:15.159640 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-06 01:39:15.735537 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-06 01:39:15.953688 | orchestrator |
2026-04-06 01:39:15.953770 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-06 01:39:15.953780 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-06 01:39:15.953786 | orchestrator | + create
2026-04-06 01:39:15.953793 | orchestrator | <= read (data resources)
2026-04-06 01:39:15.953799 | orchestrator |
2026-04-06 01:39:15.953805 | orchestrator | OpenTofu will perform the following actions:
2026-04-06 01:39:15.953820 | orchestrator |
2026-04-06 01:39:15.953826 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-04-06 01:39:15.953832 | orchestrator | # (config refers to values not yet known)
2026-04-06 01:39:15.953837 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-04-06 01:39:15.953843 | orchestrator | + checksum = (known after apply)
2026-04-06 01:39:15.953848 | orchestrator | + created_at = (known after apply)
2026-04-06 01:39:15.953854 | orchestrator | + file = (known after apply)
2026-04-06 01:39:15.953859 | orchestrator | + id = (known after apply)
2026-04-06 01:39:15.953885 | orchestrator | + metadata = (known after apply)
2026-04-06 01:39:15.953891 | orchestrator | + min_disk_gb = (known after apply)
2026-04-06 01:39:15.953897 | orchestrator | + min_ram_mb = (known after apply)
2026-04-06 01:39:15.953902 | orchestrator | + most_recent = true
2026-04-06 01:39:15.953907 | orchestrator | + name = (known after apply)
2026-04-06 01:39:15.953912 | orchestrator | + protected = (known after apply)
2026-04-06 01:39:15.953918 | orchestrator | + region = (known after apply)
2026-04-06 01:39:15.953926 | orchestrator | + schema = (known after apply)
2026-04-06 01:39:15.953931 | orchestrator | + size_bytes = (known after apply)
2026-04-06 01:39:15.953936 | orchestrator | + tags = (known after apply)
2026-04-06 01:39:15.953941 | orchestrator | + updated_at = (known after apply)
2026-04-06 01:39:15.953947 | orchestrator | }
2026-04-06 01:39:15.953952 | orchestrator |
2026-04-06 01:39:15.953957 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-04-06 01:39:15.953963 | orchestrator | # (config refers to values not yet known)
2026-04-06 01:39:15.953968 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-04-06 01:39:15.953973 | orchestrator | + checksum = (known after apply)
2026-04-06 01:39:15.953979 | orchestrator | + created_at = (known after apply)
2026-04-06 01:39:15.953984 | orchestrator | + file = (known after apply)
2026-04-06 01:39:15.953989 | orchestrator | + id = (known after apply)
2026-04-06 01:39:15.953994 | orchestrator | + metadata = (known after apply)
2026-04-06 01:39:15.953999 | orchestrator | + min_disk_gb = (known after apply)
2026-04-06 01:39:15.954004 | orchestrator | + min_ram_mb = (known after apply)
2026-04-06 01:39:15.954009 | orchestrator | + most_recent = true
2026-04-06 01:39:15.954032 | orchestrator | + name = (known after apply)
2026-04-06 01:39:15.954038 | orchestrator | + protected = (known after apply)
2026-04-06 01:39:15.954043 | orchestrator | + region = (known after apply)
2026-04-06 01:39:15.954048 | orchestrator | + schema = (known after apply)
2026-04-06 01:39:15.954053 | orchestrator | + size_bytes = (known after apply)
2026-04-06 01:39:15.954059 | orchestrator | + tags = (known after apply)
2026-04-06 01:39:15.954064 | orchestrator | + updated_at = (known after apply)
2026-04-06 01:39:15.954069 | orchestrator | }
2026-04-06 01:39:15.954077 | orchestrator |
2026-04-06 01:39:15.954083 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-04-06 01:39:15.954088 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-04-06 01:39:15.954094 | orchestrator | + content = (known after apply)
2026-04-06 01:39:15.954100 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-06 01:39:15.954105 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-06 01:39:15.954110 | orchestrator | + content_md5 = (known after apply)
2026-04-06 01:39:15.954115 | orchestrator | + content_sha1 = (known after apply)
2026-04-06 01:39:15.954120 | orchestrator | + content_sha256 = (known after apply)
2026-04-06 01:39:15.954125 | orchestrator | + content_sha512 = (known after apply)
2026-04-06 01:39:15.954131 | orchestrator | + directory_permission = "0777"
2026-04-06 01:39:15.954136 | orchestrator | + file_permission = "0644"
2026-04-06 01:39:15.954141 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-04-06 01:39:15.954146 | orchestrator | + id = (known after apply)
2026-04-06 01:39:15.954151 | orchestrator | }
2026-04-06 01:39:15.954156 | orchestrator |
2026-04-06 01:39:15.954162 | orchestrator | # local_file.id_rsa_pub will be created
2026-04-06 01:39:15.954167 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-04-06 01:39:15.954172 | orchestrator | + content = (known after apply)
2026-04-06 01:39:15.954177 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-06 01:39:15.954182 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-06 01:39:15.954187 | orchestrator | + content_md5 = (known after apply)
2026-04-06 01:39:15.954192 | orchestrator | + content_sha1 = (known after apply)
2026-04-06 01:39:15.954198 | orchestrator | + content_sha256 = (known after apply)
2026-04-06 01:39:15.954203 | orchestrator | + content_sha512 = (known after apply)
2026-04-06 01:39:15.954208 | orchestrator | + directory_permission = "0777"
2026-04-06 01:39:15.954213 | orchestrator | + file_permission = "0644"
2026-04-06 01:39:15.954226 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-04-06 01:39:15.954231 | orchestrator | + id = (known after apply)
2026-04-06 01:39:15.954236 | orchestrator | }
2026-04-06 01:39:15.954241 | orchestrator |
2026-04-06 01:39:15.954255 | orchestrator | # local_file.inventory will be created
2026-04-06 01:39:15.954260 | orchestrator | + resource "local_file" "inventory" {
2026-04-06 01:39:15.954265 | orchestrator | + content = (known after apply)
2026-04-06 01:39:15.954271 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-06 01:39:15.954276 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-06 01:39:15.954281 | orchestrator | + content_md5 = (known after apply)
2026-04-06 01:39:15.954286 | orchestrator | + content_sha1 = (known after apply)
2026-04-06 01:39:15.954291 | orchestrator | + content_sha256 = (known after apply)
2026-04-06 01:39:15.954297 | orchestrator | + content_sha512 = (known after apply)
2026-04-06 01:39:15.954302 | orchestrator | + directory_permission = "0777"
2026-04-06 01:39:15.954307 | orchestrator | + file_permission = "0644"
2026-04-06 01:39:15.954312 | orchestrator | + filename = "inventory.ci"
2026-04-06 01:39:15.954317 | orchestrator | + id = (known after apply)
2026-04-06 01:39:15.954322 | orchestrator | }
2026-04-06 01:39:15.954328 | orchestrator |
2026-04-06 01:39:15.954333 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-04-06 01:39:15.954338 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-04-06 01:39:15.954343 | orchestrator | + content = (sensitive value)
2026-04-06 01:39:15.954348 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-06 01:39:15.954353 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-06 01:39:15.954359 | orchestrator | + content_md5 = (known after apply)
2026-04-06 01:39:15.954364 | orchestrator | + content_sha1 = (known after apply)
2026-04-06 01:39:15.954369 | orchestrator | + content_sha256 = (known after apply)
2026-04-06 01:39:15.954374 | orchestrator | + content_sha512 = (known after apply)
2026-04-06 01:39:15.954379 | orchestrator | + directory_permission = "0700"
2026-04-06 01:39:15.954384 | orchestrator | + file_permission = "0600"
2026-04-06 01:39:15.954390 | orchestrator | + filename = ".id_rsa.ci"
2026-04-06 01:39:15.954395 | orchestrator | + id = (known after apply)
2026-04-06 01:39:15.954400 | orchestrator | }
2026-04-06 01:39:15.954405 | orchestrator |
2026-04-06 01:39:15.954410 | orchestrator | # null_resource.node_semaphore will be created
2026-04-06 01:39:15.954415 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-04-06 01:39:15.954420 | orchestrator | + id = (known after apply)
2026-04-06 01:39:15.954426 | orchestrator | }
2026-04-06 01:39:15.954434 | orchestrator |
2026-04-06 01:39:15.954439 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-06 01:39:15.954445 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-06 01:39:15.954450 | orchestrator | + attachment = (known after apply)
2026-04-06 01:39:15.954455 | orchestrator | + availability_zone = "nova"
2026-04-06 01:39:15.954460 | orchestrator | + id = (known after apply)
2026-04-06 01:39:15.954465 | orchestrator | + image_id = (known after apply)
2026-04-06 01:39:15.954470 | orchestrator | + metadata = (known after apply)
2026-04-06 01:39:15.954476 | orchestrator | + name = "testbed-volume-manager-base"
2026-04-06 01:39:15.954481 | orchestrator | + region = (known after apply)
2026-04-06 01:39:15.954486 | orchestrator | + size = 80
2026-04-06 01:39:15.954491 | orchestrator | + volume_retype_policy = "never"
2026-04-06 01:39:15.954496 | orchestrator | + volume_type = "ssd"
2026-04-06 01:39:15.954501 | orchestrator | }
2026-04-06 01:39:15.954506 | orchestrator |
2026-04-06 01:39:15.954512 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-06 01:39:15.954517 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-06 01:39:15.954522 | orchestrator | + attachment = (known after apply)
2026-04-06 01:39:15.954527 | orchestrator | + availability_zone = "nova"
2026-04-06 01:39:15.954532 | orchestrator | + id = (known after apply)
2026-04-06 01:39:15.954542 | orchestrator | + image_id = (known after apply)
2026-04-06 01:39:15.954547 | orchestrator | + metadata = (known after apply)
2026-04-06 01:39:15.954552 | orchestrator | + name = "testbed-volume-0-node-base"
2026-04-06 01:39:15.954557 | orchestrator | + region = (known after apply)
2026-04-06 01:39:15.954563 | orchestrator | + size = 80
2026-04-06 01:39:15.954568 | orchestrator | + volume_retype_policy = "never"
2026-04-06 01:39:15.954573 | orchestrator | + volume_type = "ssd"
2026-04-06 01:39:15.954578 | orchestrator | }
2026-04-06 01:39:15.954583 | orchestrator |
2026-04-06 01:39:15.954588 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-06 01:39:15.954593 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-06 01:39:15.954599 | orchestrator | + attachment = (known after apply)
2026-04-06 01:39:15.954604 | orchestrator | + availability_zone = "nova"
2026-04-06 01:39:15.954609 | orchestrator | + id = (known after apply)
2026-04-06 01:39:15.954614 | orchestrator | + image_id = (known after apply)
2026-04-06 01:39:15.954619 | orchestrator | + metadata = (known after apply)
2026-04-06 01:39:15.954624 | orchestrator | + name = "testbed-volume-1-node-base"
2026-04-06 01:39:15.954630 | orchestrator | + region = (known after apply)
2026-04-06 01:39:15.954635 | orchestrator | + size = 80
2026-04-06 01:39:15.954640 | orchestrator | + volume_retype_policy = "never"
2026-04-06 01:39:15.954645 | orchestrator | + volume_type = "ssd"
2026-04-06 01:39:15.954650 | orchestrator | }
2026-04-06 01:39:15.954655 | orchestrator |
2026-04-06 01:39:15.954661 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-06 01:39:15.954666 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-06 01:39:15.954671 | orchestrator | + attachment = (known after apply)
2026-04-06 01:39:15.954676 | orchestrator | + availability_zone = "nova"
2026-04-06 01:39:15.954716 | orchestrator | + id = (known after apply)
2026-04-06 01:39:15.954721 | orchestrator | + image_id = (known after apply)
2026-04-06 01:39:15.954726 | orchestrator | + metadata = (known after apply)
2026-04-06 01:39:15.954731 | orchestrator | + name = "testbed-volume-2-node-base"
2026-04-06 01:39:15.954736 | orchestrator | + region = (known after apply)
2026-04-06 01:39:15.954742 | orchestrator | + size = 80
2026-04-06 01:39:15.954747 | orchestrator | + volume_retype_policy = "never"
2026-04-06 01:39:15.954752 | orchestrator | + volume_type = "ssd"
2026-04-06 01:39:15.954757 | orchestrator | }
2026-04-06 01:39:15.954762 | orchestrator |
2026-04-06 01:39:15.954767 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-06 01:39:15.954772 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-06 01:39:15.954778 | orchestrator | + attachment = (known after apply)
2026-04-06 01:39:15.954783 | orchestrator | + availability_zone = "nova"
2026-04-06 01:39:15.954788 | orchestrator | + id = (known after apply)
2026-04-06 01:39:15.954793 | orchestrator | + image_id = (known after apply)
2026-04-06 01:39:15.954798 | orchestrator | + metadata = (known after apply)
2026-04-06 01:39:15.954806 | orchestrator | + name = "testbed-volume-3-node-base"
2026-04-06 01:39:15.954811 | orchestrator | + region = (known after apply)
2026-04-06 01:39:15.954817 | orchestrator | + size = 80
2026-04-06 01:39:15.954822 | orchestrator | + volume_retype_policy = "never"
2026-04-06 01:39:15.954827 | orchestrator | + volume_type = "ssd"
2026-04-06 01:39:15.954832 | orchestrator | }
2026-04-06 01:39:15.954837 | orchestrator |
2026-04-06 01:39:15.954842 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-06 01:39:15.954847 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-06 01:39:15.954853 | orchestrator | + attachment = (known after apply)
2026-04-06 01:39:15.954858 | orchestrator | + availability_zone = "nova"
2026-04-06 01:39:15.954863 | orchestrator | + id = (known after apply)
2026-04-06 01:39:15.954872 | orchestrator | + image_id = (known after apply)
2026-04-06 01:39:15.954878 | orchestrator | + metadata = (known after apply)
2026-04-06 01:39:15.954883 | orchestrator | + name = "testbed-volume-4-node-base"
2026-04-06 01:39:15.954888 | orchestrator | + region = (known after apply)
2026-04-06 01:39:15.954893 | orchestrator | + size = 80
2026-04-06 01:39:15.954898 | orchestrator | + volume_retype_policy = "never"
2026-04-06 01:39:15.954903 | orchestrator | + volume_type = "ssd"
2026-04-06 01:39:15.954908 | orchestrator | }
2026-04-06 01:39:15.954913 | orchestrator |
2026-04-06 01:39:15.954918 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-06 01:39:15.954923 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-06 01:39:15.954928 | orchestrator | + attachment = (known after apply)
2026-04-06 01:39:15.954934 | orchestrator | + availability_zone = "nova"
2026-04-06 01:39:15.954939 | orchestrator | + id = (known after apply)
2026-04-06 01:39:15.954944 | orchestrator | + image_id = (known after apply)
2026-04-06 01:39:15.954949 | orchestrator | + metadata = (known after apply)
2026-04-06 01:39:15.954959 | orchestrator | + name = "testbed-volume-5-node-base"
2026-04-06 01:39:15.954964 | orchestrator | + region = (known after apply)
2026-04-06 01:39:15.954969 | orchestrator | + size = 80
2026-04-06 01:39:15.954974 | orchestrator | + volume_retype_policy = "never"
2026-04-06 01:39:15.954979 | orchestrator | + volume_type = "ssd"
2026-04-06 01:39:15.954984 | orchestrator | }
2026-04-06 01:39:15.954989 | orchestrator |
2026-04-06 01:39:15.954995 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-06 01:39:15.955000 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-06 01:39:15.955006 | orchestrator | + attachment = (known after apply)
2026-04-06 01:39:15.955011 | orchestrator | + availability_zone = "nova"
2026-04-06 01:39:15.955016 | orchestrator | + id = (known after apply)
2026-04-06 01:39:15.955021 | orchestrator | + metadata = (known after apply)
2026-04-06 01:39:15.955026 | orchestrator | + name = "testbed-volume-0-node-3"
2026-04-06 01:39:15.955031 | orchestrator | + region = (known after apply)
2026-04-06 01:39:15.955036 | orchestrator | + size = 20
2026-04-06 01:39:15.955041 | orchestrator | + volume_retype_policy = "never"
2026-04-06 01:39:15.955047 | orchestrator | + volume_type = "ssd"
2026-04-06 01:39:15.955052 | orchestrator | }
2026-04-06 01:39:15.955057 | orchestrator |
2026-04-06 01:39:15.955062 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-06 01:39:15.955067 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-06 01:39:15.955072 | orchestrator | + attachment = (known after apply)
2026-04-06 01:39:15.955077 | orchestrator | + availability_zone = "nova"
2026-04-06 01:39:15.955083 | orchestrator | + id = (known after apply)
2026-04-06 01:39:15.955088 | orchestrator | + metadata = (known after apply)
2026-04-06 01:39:15.955093 | orchestrator | + name = "testbed-volume-1-node-4"
2026-04-06 01:39:15.955098 | orchestrator | + region = (known after apply)
2026-04-06 01:39:15.955103 | orchestrator | + size = 20
2026-04-06 01:39:15.955108 | orchestrator | + volume_retype_policy = "never"
2026-04-06 01:39:15.955113 | orchestrator | + volume_type = "ssd"
2026-04-06 01:39:15.955118 | orchestrator | }
2026-04-06 01:39:15.955124 | orchestrator |
2026-04-06 01:39:15.955129 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-06 01:39:15.955134 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-06 01:39:15.955139 | orchestrator | + attachment = (known after apply)
2026-04-06 01:39:15.955144 | orchestrator | + availability_zone = "nova"
2026-04-06 01:39:15.955149 | orchestrator | + id = (known after apply)
2026-04-06 01:39:15.955155 | orchestrator | + metadata = (known after apply)
2026-04-06 01:39:15.955160 | orchestrator | + name = "testbed-volume-2-node-5"
2026-04-06 01:39:15.955165 | orchestrator | + region = (known after apply)
2026-04-06 01:39:15.955174 | orchestrator | + size = 20
2026-04-06 01:39:15.955179 | orchestrator | + volume_retype_policy = "never"
2026-04-06 01:39:15.955184 | orchestrator | + volume_type = "ssd"
2026-04-06 01:39:15.955189 | orchestrator | }
2026-04-06 01:39:15.955194 | orchestrator |
2026-04-06 01:39:15.955199 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-06 01:39:15.955204 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-06 01:39:15.955209 | orchestrator | + attachment = (known after apply)
2026-04-06 01:39:15.955214 | orchestrator | + availability_zone = "nova"
2026-04-06 01:39:15.955219 | orchestrator | + id = (known after apply)
2026-04-06 01:39:15.955224 | orchestrator | + metadata = (known after apply)
2026-04-06 01:39:15.955230 | orchestrator | + name = "testbed-volume-3-node-3"
2026-04-06 01:39:15.955235 | orchestrator | + region = (known after apply)
2026-04-06 01:39:15.955240 | orchestrator | + size = 20
2026-04-06 01:39:15.955245 | orchestrator | + volume_retype_policy = "never"
2026-04-06 01:39:15.955250 | orchestrator | + volume_type = "ssd"
2026-04-06 01:39:15.955255 | orchestrator | }
2026-04-06 01:39:15.955260 | orchestrator |
2026-04-06 01:39:15.955265 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-06 01:39:15.955270 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-06 01:39:15.955275 | orchestrator | + attachment = (known after apply)
2026-04-06 01:39:15.955280 | orchestrator | + availability_zone = "nova"
2026-04-06 01:39:15.955286 | orchestrator | + id = (known after apply)
2026-04-06 01:39:15.955291 | orchestrator | + metadata = (known after apply)
2026-04-06 01:39:15.955296 | orchestrator | + name = "testbed-volume-4-node-4"
2026-04-06 01:39:15.955301 | orchestrator | + region = (known after apply)
2026-04-06 01:39:15.955309 | orchestrator | + size = 20
2026-04-06 01:39:15.955314 | orchestrator | + volume_retype_policy = "never"
2026-04-06 01:39:15.955319 | orchestrator | + volume_type = "ssd"
2026-04-06 01:39:15.955325 | orchestrator | }
2026-04-06 01:39:15.955330 | orchestrator |
2026-04-06 01:39:15.955335 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-06 01:39:15.955340 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-06 01:39:15.955345 | orchestrator | + attachment = (known after apply)
2026-04-06 01:39:15.955350 | orchestrator | + availability_zone = "nova"
2026-04-06 01:39:15.955355 | orchestrator | + id = (known after apply)
2026-04-06 01:39:15.955360 | orchestrator | + metadata = (known after apply)
2026-04-06 01:39:15.955365 | orchestrator | + name = "testbed-volume-5-node-5"
2026-04-06 01:39:15.955370 | orchestrator | + region = (known after apply)
2026-04-06 01:39:15.955375 | orchestrator | + size = 20
2026-04-06 01:39:15.955381 | orchestrator | + volume_retype_policy = "never"
2026-04-06 01:39:15.955386 | orchestrator | + volume_type = "ssd"
2026-04-06 01:39:15.955391 | orchestrator | }
2026-04-06 01:39:15.955396 | orchestrator |
2026-04-06 01:39:15.955401 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-06 01:39:15.955406 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-06 01:39:15.955411 | orchestrator | + attachment = (known after apply)
2026-04-06 01:39:15.955417 | orchestrator | + availability_zone = "nova"
2026-04-06 01:39:15.955422 | orchestrator | + id = (known after apply)
2026-04-06 01:39:15.955427 | orchestrator | + metadata = (known after apply)
2026-04-06 01:39:15.955432 | orchestrator | + name = "testbed-volume-6-node-3"
2026-04-06 01:39:15.955437 | orchestrator | + region = (known after apply)
2026-04-06 01:39:15.955442 | orchestrator | + size = 20
2026-04-06 01:39:15.955447 | orchestrator | + volume_retype_policy = "never"
2026-04-06 01:39:15.955452 | orchestrator | + volume_type = "ssd"
2026-04-06 01:39:15.955457 | orchestrator | }
2026-04-06 01:39:15.955463 | orchestrator |
2026-04-06 01:39:15.955471 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-06 01:39:15.955476 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-06 01:39:15.955485 | orchestrator | + attachment = (known after apply)
2026-04-06 01:39:15.955490 | orchestrator | + availability_zone = "nova"
2026-04-06 01:39:15.955495 | orchestrator | + id = (known after apply)
2026-04-06 01:39:15.955500 | orchestrator | + metadata = (known after apply)
2026-04-06 01:39:15.955505 | orchestrator | + name = "testbed-volume-7-node-4"
2026-04-06 01:39:15.955510 | orchestrator | + region = (known after apply)
2026-04-06 01:39:15.955515 | orchestrator | + size = 20
2026-04-06 01:39:15.955521 | orchestrator | + volume_retype_policy = "never"
2026-04-06 01:39:15.955526 | orchestrator | + volume_type = "ssd"
2026-04-06 01:39:15.955531 | orchestrator | }
2026-04-06 01:39:15.955536 | orchestrator |
2026-04-06 01:39:15.955541 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-06 01:39:15.955546 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-06 01:39:15.955551 | orchestrator | + attachment = (known after apply) 2026-04-06 01:39:15.955556 | orchestrator | + availability_zone = "nova" 2026-04-06 01:39:15.955561 | orchestrator | + id = (known after apply) 2026-04-06 01:39:15.955567 | orchestrator | + metadata = (known after apply) 2026-04-06 01:39:15.955572 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-06 01:39:15.955577 | orchestrator | + region = (known after apply) 2026-04-06 01:39:15.955582 | orchestrator | + size = 20 2026-04-06 01:39:15.955587 | orchestrator | + volume_retype_policy = "never" 2026-04-06 01:39:15.955592 | orchestrator | + volume_type = "ssd" 2026-04-06 01:39:15.955597 | orchestrator | } 2026-04-06 01:39:15.955602 | orchestrator | 2026-04-06 01:39:15.955607 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-06 01:39:15.955612 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-06 01:39:15.955617 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-06 01:39:15.955623 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-06 01:39:15.955628 | orchestrator | + all_metadata = (known after apply) 2026-04-06 01:39:15.955633 | orchestrator | + all_tags = (known after apply) 2026-04-06 01:39:15.955638 | orchestrator | + availability_zone = "nova" 2026-04-06 01:39:15.955643 | orchestrator | + config_drive = true 2026-04-06 01:39:15.955648 | orchestrator | + created = (known after apply) 2026-04-06 01:39:15.955653 | orchestrator | + flavor_id = (known after apply) 2026-04-06 01:39:15.955658 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-06 01:39:15.955663 | orchestrator | + force_delete = false 2026-04-06 01:39:15.955668 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-06 01:39:15.955673 | 
orchestrator | + id = (known after apply) 2026-04-06 01:39:15.955693 | orchestrator | + image_id = (known after apply) 2026-04-06 01:39:15.955699 | orchestrator | + image_name = (known after apply) 2026-04-06 01:39:15.955704 | orchestrator | + key_pair = "testbed" 2026-04-06 01:39:15.955709 | orchestrator | + name = "testbed-manager" 2026-04-06 01:39:15.955714 | orchestrator | + power_state = "active" 2026-04-06 01:39:15.955719 | orchestrator | + region = (known after apply) 2026-04-06 01:39:15.955724 | orchestrator | + security_groups = (known after apply) 2026-04-06 01:39:15.955729 | orchestrator | + stop_before_destroy = false 2026-04-06 01:39:15.955735 | orchestrator | + updated = (known after apply) 2026-04-06 01:39:15.955740 | orchestrator | + user_data = (sensitive value) 2026-04-06 01:39:15.955745 | orchestrator | 2026-04-06 01:39:15.955750 | orchestrator | + block_device { 2026-04-06 01:39:15.955755 | orchestrator | + boot_index = 0 2026-04-06 01:39:15.955760 | orchestrator | + delete_on_termination = false 2026-04-06 01:39:15.955768 | orchestrator | + destination_type = "volume" 2026-04-06 01:39:15.955774 | orchestrator | + multiattach = false 2026-04-06 01:39:15.955779 | orchestrator | + source_type = "volume" 2026-04-06 01:39:15.955784 | orchestrator | + uuid = (known after apply) 2026-04-06 01:39:15.955793 | orchestrator | } 2026-04-06 01:39:15.955798 | orchestrator | 2026-04-06 01:39:15.955803 | orchestrator | + network { 2026-04-06 01:39:15.955808 | orchestrator | + access_network = false 2026-04-06 01:39:15.955814 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-06 01:39:15.955819 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-06 01:39:15.955824 | orchestrator | + mac = (known after apply) 2026-04-06 01:39:15.955829 | orchestrator | + name = (known after apply) 2026-04-06 01:39:15.955834 | orchestrator | + port = (known after apply) 2026-04-06 01:39:15.955839 | orchestrator | + uuid = (known after apply) 2026-04-06 
01:39:15.955844 | orchestrator | } 2026-04-06 01:39:15.955849 | orchestrator | } 2026-04-06 01:39:15.955854 | orchestrator | 2026-04-06 01:39:15.955859 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-06 01:39:15.955865 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-06 01:39:15.955870 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-06 01:39:15.955875 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-06 01:39:15.955880 | orchestrator | + all_metadata = (known after apply) 2026-04-06 01:39:15.955885 | orchestrator | + all_tags = (known after apply) 2026-04-06 01:39:15.955890 | orchestrator | + availability_zone = "nova" 2026-04-06 01:39:15.955895 | orchestrator | + config_drive = true 2026-04-06 01:39:15.955900 | orchestrator | + created = (known after apply) 2026-04-06 01:39:15.955905 | orchestrator | + flavor_id = (known after apply) 2026-04-06 01:39:15.955910 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-06 01:39:15.955915 | orchestrator | + force_delete = false 2026-04-06 01:39:15.955921 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-06 01:39:15.955926 | orchestrator | + id = (known after apply) 2026-04-06 01:39:15.955931 | orchestrator | + image_id = (known after apply) 2026-04-06 01:39:15.955936 | orchestrator | + image_name = (known after apply) 2026-04-06 01:39:15.955941 | orchestrator | + key_pair = "testbed" 2026-04-06 01:39:15.955946 | orchestrator | + name = "testbed-node-0" 2026-04-06 01:39:15.955951 | orchestrator | + power_state = "active" 2026-04-06 01:39:15.955956 | orchestrator | + region = (known after apply) 2026-04-06 01:39:15.955961 | orchestrator | + security_groups = (known after apply) 2026-04-06 01:39:15.955967 | orchestrator | + stop_before_destroy = false 2026-04-06 01:39:15.955972 | orchestrator | + updated = (known after apply) 2026-04-06 01:39:15.955977 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-06 01:39:15.955982 | orchestrator | 2026-04-06 01:39:15.955987 | orchestrator | + block_device { 2026-04-06 01:39:15.955992 | orchestrator | + boot_index = 0 2026-04-06 01:39:15.956000 | orchestrator | + delete_on_termination = false 2026-04-06 01:39:15.956006 | orchestrator | + destination_type = "volume" 2026-04-06 01:39:15.956011 | orchestrator | + multiattach = false 2026-04-06 01:39:15.956016 | orchestrator | + source_type = "volume" 2026-04-06 01:39:15.956021 | orchestrator | + uuid = (known after apply) 2026-04-06 01:39:15.956026 | orchestrator | } 2026-04-06 01:39:15.956031 | orchestrator | 2026-04-06 01:39:15.956036 | orchestrator | + network { 2026-04-06 01:39:15.956041 | orchestrator | + access_network = false 2026-04-06 01:39:15.956047 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-06 01:39:15.956052 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-06 01:39:15.956057 | orchestrator | + mac = (known after apply) 2026-04-06 01:39:15.956062 | orchestrator | + name = (known after apply) 2026-04-06 01:39:15.956067 | orchestrator | + port = (known after apply) 2026-04-06 01:39:15.956072 | orchestrator | + uuid = (known after apply) 2026-04-06 01:39:15.956077 | orchestrator | } 2026-04-06 01:39:15.956083 | orchestrator | } 2026-04-06 01:39:15.956088 | orchestrator | 2026-04-06 01:39:15.956093 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-06 01:39:15.956098 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-06 01:39:15.956103 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-06 01:39:15.956112 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-06 01:39:15.956117 | orchestrator | + all_metadata = (known after apply) 2026-04-06 01:39:15.956122 | orchestrator | + all_tags = (known after apply) 2026-04-06 01:39:15.956127 | orchestrator | + availability_zone = "nova" 2026-04-06 01:39:15.956132 
| orchestrator | + config_drive = true 2026-04-06 01:39:15.956138 | orchestrator | + created = (known after apply) 2026-04-06 01:39:15.956143 | orchestrator | + flavor_id = (known after apply) 2026-04-06 01:39:15.956148 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-06 01:39:15.956153 | orchestrator | + force_delete = false 2026-04-06 01:39:15.956158 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-06 01:39:15.956163 | orchestrator | + id = (known after apply) 2026-04-06 01:39:15.956168 | orchestrator | + image_id = (known after apply) 2026-04-06 01:39:15.956173 | orchestrator | + image_name = (known after apply) 2026-04-06 01:39:15.956179 | orchestrator | + key_pair = "testbed" 2026-04-06 01:39:15.956184 | orchestrator | + name = "testbed-node-1" 2026-04-06 01:39:15.956189 | orchestrator | + power_state = "active" 2026-04-06 01:39:15.956194 | orchestrator | + region = (known after apply) 2026-04-06 01:39:15.956199 | orchestrator | + security_groups = (known after apply) 2026-04-06 01:39:15.956204 | orchestrator | + stop_before_destroy = false 2026-04-06 01:39:15.956209 | orchestrator | + updated = (known after apply) 2026-04-06 01:39:15.956214 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-06 01:39:15.956219 | orchestrator | 2026-04-06 01:39:15.956225 | orchestrator | + block_device { 2026-04-06 01:39:15.956230 | orchestrator | + boot_index = 0 2026-04-06 01:39:15.956235 | orchestrator | + delete_on_termination = false 2026-04-06 01:39:15.956240 | orchestrator | + destination_type = "volume" 2026-04-06 01:39:15.956245 | orchestrator | + multiattach = false 2026-04-06 01:39:15.956250 | orchestrator | + source_type = "volume" 2026-04-06 01:39:15.956255 | orchestrator | + uuid = (known after apply) 2026-04-06 01:39:15.956260 | orchestrator | } 2026-04-06 01:39:15.956265 | orchestrator | 2026-04-06 01:39:15.956270 | orchestrator | + network { 2026-04-06 01:39:15.956275 | orchestrator | + access_network = 
false 2026-04-06 01:39:15.956281 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-06 01:39:15.956286 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-06 01:39:15.956291 | orchestrator | + mac = (known after apply) 2026-04-06 01:39:15.956296 | orchestrator | + name = (known after apply) 2026-04-06 01:39:15.956301 | orchestrator | + port = (known after apply) 2026-04-06 01:39:15.956306 | orchestrator | + uuid = (known after apply) 2026-04-06 01:39:15.956311 | orchestrator | } 2026-04-06 01:39:15.956316 | orchestrator | } 2026-04-06 01:39:15.956321 | orchestrator | 2026-04-06 01:39:15.956327 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-06 01:39:15.956332 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-06 01:39:15.956337 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-06 01:39:15.956342 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-06 01:39:15.956347 | orchestrator | + all_metadata = (known after apply) 2026-04-06 01:39:15.956353 | orchestrator | + all_tags = (known after apply) 2026-04-06 01:39:15.956361 | orchestrator | + availability_zone = "nova" 2026-04-06 01:39:15.956366 | orchestrator | + config_drive = true 2026-04-06 01:39:15.956372 | orchestrator | + created = (known after apply) 2026-04-06 01:39:15.956377 | orchestrator | + flavor_id = (known after apply) 2026-04-06 01:39:15.956382 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-06 01:39:15.956387 | orchestrator | + force_delete = false 2026-04-06 01:39:15.956392 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-06 01:39:15.956397 | orchestrator | + id = (known after apply) 2026-04-06 01:39:15.956402 | orchestrator | + image_id = (known after apply) 2026-04-06 01:39:15.956412 | orchestrator | + image_name = (known after apply) 2026-04-06 01:39:15.956417 | orchestrator | + key_pair = "testbed" 2026-04-06 01:39:15.956422 | orchestrator | + name = 
"testbed-node-2" 2026-04-06 01:39:15.956427 | orchestrator | + power_state = "active" 2026-04-06 01:39:15.956432 | orchestrator | + region = (known after apply) 2026-04-06 01:39:15.956437 | orchestrator | + security_groups = (known after apply) 2026-04-06 01:39:15.956442 | orchestrator | + stop_before_destroy = false 2026-04-06 01:39:15.956447 | orchestrator | + updated = (known after apply) 2026-04-06 01:39:15.956452 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-06 01:39:15.956457 | orchestrator | 2026-04-06 01:39:15.956462 | orchestrator | + block_device { 2026-04-06 01:39:15.956467 | orchestrator | + boot_index = 0 2026-04-06 01:39:15.956473 | orchestrator | + delete_on_termination = false 2026-04-06 01:39:15.956478 | orchestrator | + destination_type = "volume" 2026-04-06 01:39:15.956483 | orchestrator | + multiattach = false 2026-04-06 01:39:15.956488 | orchestrator | + source_type = "volume" 2026-04-06 01:39:15.956493 | orchestrator | + uuid = (known after apply) 2026-04-06 01:39:15.956498 | orchestrator | } 2026-04-06 01:39:15.956503 | orchestrator | 2026-04-06 01:39:15.956508 | orchestrator | + network { 2026-04-06 01:39:15.956513 | orchestrator | + access_network = false 2026-04-06 01:39:15.956518 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-06 01:39:15.956523 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-06 01:39:15.956528 | orchestrator | + mac = (known after apply) 2026-04-06 01:39:15.956536 | orchestrator | + name = (known after apply) 2026-04-06 01:39:15.956542 | orchestrator | + port = (known after apply) 2026-04-06 01:39:15.956547 | orchestrator | + uuid = (known after apply) 2026-04-06 01:39:15.956552 | orchestrator | } 2026-04-06 01:39:15.956557 | orchestrator | } 2026-04-06 01:39:15.956562 | orchestrator | 2026-04-06 01:39:15.956567 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-06 01:39:15.956572 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-04-06 01:39:15.956578 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-06 01:39:15.956583 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-06 01:39:15.956588 | orchestrator | + all_metadata = (known after apply) 2026-04-06 01:39:15.956593 | orchestrator | + all_tags = (known after apply) 2026-04-06 01:39:15.956598 | orchestrator | + availability_zone = "nova" 2026-04-06 01:39:15.956603 | orchestrator | + config_drive = true 2026-04-06 01:39:15.956608 | orchestrator | + created = (known after apply) 2026-04-06 01:39:15.956613 | orchestrator | + flavor_id = (known after apply) 2026-04-06 01:39:15.956618 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-06 01:39:15.956623 | orchestrator | + force_delete = false 2026-04-06 01:39:15.956628 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-06 01:39:15.956633 | orchestrator | + id = (known after apply) 2026-04-06 01:39:15.956639 | orchestrator | + image_id = (known after apply) 2026-04-06 01:39:15.956644 | orchestrator | + image_name = (known after apply) 2026-04-06 01:39:15.956649 | orchestrator | + key_pair = "testbed" 2026-04-06 01:39:15.956654 | orchestrator | + name = "testbed-node-3" 2026-04-06 01:39:15.956659 | orchestrator | + power_state = "active" 2026-04-06 01:39:15.956664 | orchestrator | + region = (known after apply) 2026-04-06 01:39:15.956669 | orchestrator | + security_groups = (known after apply) 2026-04-06 01:39:15.956674 | orchestrator | + stop_before_destroy = false 2026-04-06 01:39:15.956692 | orchestrator | + updated = (known after apply) 2026-04-06 01:39:15.956698 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-06 01:39:15.956703 | orchestrator | 2026-04-06 01:39:15.956708 | orchestrator | + block_device { 2026-04-06 01:39:15.956720 | orchestrator | + boot_index = 0 2026-04-06 01:39:15.956725 | orchestrator | + delete_on_termination = false 2026-04-06 
01:39:15.956730 | orchestrator | + destination_type = "volume" 2026-04-06 01:39:15.956740 | orchestrator | + multiattach = false 2026-04-06 01:39:15.956745 | orchestrator | + source_type = "volume" 2026-04-06 01:39:15.956750 | orchestrator | + uuid = (known after apply) 2026-04-06 01:39:15.956755 | orchestrator | } 2026-04-06 01:39:15.956760 | orchestrator | 2026-04-06 01:39:15.956765 | orchestrator | + network { 2026-04-06 01:39:15.956770 | orchestrator | + access_network = false 2026-04-06 01:39:15.956775 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-06 01:39:15.956780 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-06 01:39:15.956785 | orchestrator | + mac = (known after apply) 2026-04-06 01:39:15.956790 | orchestrator | + name = (known after apply) 2026-04-06 01:39:15.956795 | orchestrator | + port = (known after apply) 2026-04-06 01:39:15.956800 | orchestrator | + uuid = (known after apply) 2026-04-06 01:39:15.956806 | orchestrator | } 2026-04-06 01:39:15.956811 | orchestrator | } 2026-04-06 01:39:15.956816 | orchestrator | 2026-04-06 01:39:15.956821 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-06 01:39:15.956826 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-06 01:39:15.956832 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-06 01:39:15.956837 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-06 01:39:15.956842 | orchestrator | + all_metadata = (known after apply) 2026-04-06 01:39:15.956847 | orchestrator | + all_tags = (known after apply) 2026-04-06 01:39:15.956852 | orchestrator | + availability_zone = "nova" 2026-04-06 01:39:15.956857 | orchestrator | + config_drive = true 2026-04-06 01:39:15.956862 | orchestrator | + created = (known after apply) 2026-04-06 01:39:15.956867 | orchestrator | + flavor_id = (known after apply) 2026-04-06 01:39:15.956872 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-06 01:39:15.956877 | 
orchestrator | + force_delete = false 2026-04-06 01:39:15.956882 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-06 01:39:15.956887 | orchestrator | + id = (known after apply) 2026-04-06 01:39:15.956892 | orchestrator | + image_id = (known after apply) 2026-04-06 01:39:15.956897 | orchestrator | + image_name = (known after apply) 2026-04-06 01:39:15.956902 | orchestrator | + key_pair = "testbed" 2026-04-06 01:39:15.956907 | orchestrator | + name = "testbed-node-4" 2026-04-06 01:39:15.956913 | orchestrator | + power_state = "active" 2026-04-06 01:39:15.956918 | orchestrator | + region = (known after apply) 2026-04-06 01:39:15.956923 | orchestrator | + security_groups = (known after apply) 2026-04-06 01:39:15.956928 | orchestrator | + stop_before_destroy = false 2026-04-06 01:39:15.956933 | orchestrator | + updated = (known after apply) 2026-04-06 01:39:15.956938 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-06 01:39:15.956943 | orchestrator | 2026-04-06 01:39:15.956948 | orchestrator | + block_device { 2026-04-06 01:39:15.956953 | orchestrator | + boot_index = 0 2026-04-06 01:39:15.956958 | orchestrator | + delete_on_termination = false 2026-04-06 01:39:15.956963 | orchestrator | + destination_type = "volume" 2026-04-06 01:39:15.956968 | orchestrator | + multiattach = false 2026-04-06 01:39:15.956973 | orchestrator | + source_type = "volume" 2026-04-06 01:39:15.956978 | orchestrator | + uuid = (known after apply) 2026-04-06 01:39:15.956984 | orchestrator | } 2026-04-06 01:39:15.956989 | orchestrator | 2026-04-06 01:39:15.956994 | orchestrator | + network { 2026-04-06 01:39:15.956999 | orchestrator | + access_network = false 2026-04-06 01:39:15.957004 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-06 01:39:15.957009 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-06 01:39:15.957014 | orchestrator | + mac = (known after apply) 2026-04-06 01:39:15.957019 | orchestrator | + name = (known 
after apply) 2026-04-06 01:39:15.957024 | orchestrator | + port = (known after apply) 2026-04-06 01:39:15.957029 | orchestrator | + uuid = (known after apply) 2026-04-06 01:39:15.957034 | orchestrator | } 2026-04-06 01:39:15.957040 | orchestrator | } 2026-04-06 01:39:15.957049 | orchestrator | 2026-04-06 01:39:15.957054 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-06 01:39:15.957059 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-06 01:39:15.957064 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-06 01:39:15.957069 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-06 01:39:15.957074 | orchestrator | + all_metadata = (known after apply) 2026-04-06 01:39:15.957082 | orchestrator | + all_tags = (known after apply) 2026-04-06 01:39:15.957087 | orchestrator | + availability_zone = "nova" 2026-04-06 01:39:15.957092 | orchestrator | + config_drive = true 2026-04-06 01:39:15.957097 | orchestrator | + created = (known after apply) 2026-04-06 01:39:15.957102 | orchestrator | + flavor_id = (known after apply) 2026-04-06 01:39:15.957107 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-06 01:39:15.957112 | orchestrator | + force_delete = false 2026-04-06 01:39:15.957121 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-06 01:39:15.957126 | orchestrator | + id = (known after apply) 2026-04-06 01:39:15.957131 | orchestrator | + image_id = (known after apply) 2026-04-06 01:39:15.957136 | orchestrator | + image_name = (known after apply) 2026-04-06 01:39:15.957142 | orchestrator | + key_pair = "testbed" 2026-04-06 01:39:15.957147 | orchestrator | + name = "testbed-node-5" 2026-04-06 01:39:15.957152 | orchestrator | + power_state = "active" 2026-04-06 01:39:15.957157 | orchestrator | + region = (known after apply) 2026-04-06 01:39:15.957162 | orchestrator | + security_groups = (known after apply) 2026-04-06 01:39:15.957167 | orchestrator | + 
stop_before_destroy = false 2026-04-06 01:39:15.957172 | orchestrator | + updated = (known after apply) 2026-04-06 01:39:15.957177 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-06 01:39:15.957182 | orchestrator | 2026-04-06 01:39:15.957187 | orchestrator | + block_device { 2026-04-06 01:39:15.957192 | orchestrator | + boot_index = 0 2026-04-06 01:39:15.957197 | orchestrator | + delete_on_termination = false 2026-04-06 01:39:15.957202 | orchestrator | + destination_type = "volume" 2026-04-06 01:39:15.957207 | orchestrator | + multiattach = false 2026-04-06 01:39:15.957212 | orchestrator | + source_type = "volume" 2026-04-06 01:39:15.957217 | orchestrator | + uuid = (known after apply) 2026-04-06 01:39:15.957222 | orchestrator | } 2026-04-06 01:39:15.957228 | orchestrator | 2026-04-06 01:39:15.957233 | orchestrator | + network { 2026-04-06 01:39:15.957238 | orchestrator | + access_network = false 2026-04-06 01:39:15.957243 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-06 01:39:15.957248 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-06 01:39:15.957253 | orchestrator | + mac = (known after apply) 2026-04-06 01:39:15.957258 | orchestrator | + name = (known after apply) 2026-04-06 01:39:15.957263 | orchestrator | + port = (known after apply) 2026-04-06 01:39:15.957268 | orchestrator | + uuid = (known after apply) 2026-04-06 01:39:15.957274 | orchestrator | } 2026-04-06 01:39:15.957279 | orchestrator | } 2026-04-06 01:39:15.957284 | orchestrator | 2026-04-06 01:39:15.957289 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-06 01:39:15.957294 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-04-06 01:39:15.957299 | orchestrator | + fingerprint = (known after apply) 2026-04-06 01:39:15.957304 | orchestrator | + id = (known after apply) 2026-04-06 01:39:15.957309 | orchestrator | + name = "testbed" 2026-04-06 01:39:15.957314 | orchestrator | + private_key = 
(sensitive value) 2026-04-06 01:39:15.957319 | orchestrator | + public_key = (known after apply) 2026-04-06 01:39:15.957325 | orchestrator | + region = (known after apply) 2026-04-06 01:39:15.957330 | orchestrator | + user_id = (known after apply) 2026-04-06 01:39:15.957335 | orchestrator | } 2026-04-06 01:39:15.957340 | orchestrator | 2026-04-06 01:39:15.957345 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-06 01:39:15.957350 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-06 01:39:15.957360 | orchestrator | + device = (known after apply) 2026-04-06 01:39:15.957365 | orchestrator | + id = (known after apply) 2026-04-06 01:39:15.957370 | orchestrator | + instance_id = (known after apply) 2026-04-06 01:39:15.957375 | orchestrator | + region = (known after apply) 2026-04-06 01:39:15.957380 | orchestrator | + volume_id = (known after apply) 2026-04-06 01:39:15.957385 | orchestrator | } 2026-04-06 01:39:15.957390 | orchestrator | 2026-04-06 01:39:15.957396 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-06 01:39:15.957401 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-06 01:39:15.957406 | orchestrator | + device = (known after apply) 2026-04-06 01:39:15.957411 | orchestrator | + id = (known after apply) 2026-04-06 01:39:15.957416 | orchestrator | + instance_id = (known after apply) 2026-04-06 01:39:15.957421 | orchestrator | + region = (known after apply) 2026-04-06 01:39:15.957426 | orchestrator | + volume_id = (known after apply) 2026-04-06 01:39:15.957431 | orchestrator | } 2026-04-06 01:39:15.957436 | orchestrator | 2026-04-06 01:39:15.957442 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-06 01:39:15.957447 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-04-06 01:39:15.961646 | orchestrator | + network_id = (known after apply) 2026-04-06 01:39:15.961650 | orchestrator | + no_gateway = false 2026-04-06 01:39:15.961654 | orchestrator | + region = (known after apply) 2026-04-06 01:39:15.961658 | orchestrator | + service_types = (known after apply) 2026-04-06 01:39:15.961668 | orchestrator | + tenant_id = (known after apply) 2026-04-06 01:39:15.961672 | orchestrator | 2026-04-06 01:39:15.961676 | orchestrator | + allocation_pool { 2026-04-06 01:39:15.961699 | orchestrator | + end = "192.168.31.250" 2026-04-06 01:39:15.961705 | orchestrator | + start = "192.168.31.200" 2026-04-06 01:39:15.961712 | orchestrator | } 2026-04-06 01:39:15.961719 | orchestrator | } 2026-04-06 01:39:15.961727 | orchestrator | 2026-04-06 01:39:15.961733 | orchestrator | # terraform_data.image will be created 2026-04-06 01:39:15.961740 | orchestrator | + resource "terraform_data" "image" { 2026-04-06 01:39:15.961747 | orchestrator | + id = (known after apply) 2026-04-06 01:39:15.961752 | orchestrator | + input = "Ubuntu 24.04" 2026-04-06 01:39:15.961756 | orchestrator | + output = (known after apply) 2026-04-06 01:39:15.961760 | orchestrator | } 2026-04-06 01:39:15.961764 | orchestrator | 2026-04-06 01:39:15.961768 | orchestrator | # terraform_data.image_node will be created 2026-04-06 01:39:15.961773 | orchestrator | + resource "terraform_data" "image_node" { 2026-04-06 01:39:15.961777 | orchestrator | + id = (known after apply) 2026-04-06 01:39:15.961781 | orchestrator | + input = "Ubuntu 24.04" 2026-04-06 01:39:15.961785 | orchestrator | + output = (known after apply) 2026-04-06 01:39:15.961789 | orchestrator | } 2026-04-06 01:39:15.961794 | orchestrator | 2026-04-06 01:39:15.961798 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
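[Editor's note: the plan entries above correspond to resource definitions in the testbed's Terraform configuration. The following is a minimal sketch reconstructed from the plan output only; argument names and values are taken from the plan, but the actual .tf files in the osism/testbed repository may differ, and the `security_group_id`/`network_id` references are assumptions, since the plan shows them only as "(known after apply)".]

```hcl
# Sketch, not the testbed's actual source. VRRP is IP protocol number 112,
# which is why the plan shows protocol = "112" rather than a protocol name.
resource "openstack_networking_secgroup_v2" "security_group_node" {
  name        = "testbed-node"
  description = "node security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"
  remote_ip_prefix  = "0.0.0.0/0"
  # Assumed attachment; the plan only shows (known after apply) here.
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  # Assumed reference to the management network created in the same plan.
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]
  enable_dhcp     = true

  # DHCP hands out addresses only from this pool within the /20.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```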
2026-04-06 01:39:15.961802 | orchestrator |
2026-04-06 01:39:15.961806 | orchestrator | Changes to Outputs:
2026-04-06 01:39:15.961810 | orchestrator | + manager_address = (sensitive value)
2026-04-06 01:39:15.961815 | orchestrator | + private_key = (sensitive value)
2026-04-06 01:39:16.034229 | orchestrator | terraform_data.image: Creating...
2026-04-06 01:39:16.034525 | orchestrator | terraform_data.image: Creation complete after 0s [id=15b6367a-3ba6-5644-8aab-4588d2a01801]
2026-04-06 01:39:16.175128 | orchestrator | terraform_data.image_node: Creating...
2026-04-06 01:39:16.175227 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=e55c8b0e-79eb-5d4d-0cf1-5d59e28a2fc0]
2026-04-06 01:39:16.192064 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-04-06 01:39:16.192826 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-04-06 01:39:16.198674 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-04-06 01:39:16.199538 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-04-06 01:39:16.202643 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-04-06 01:39:16.203627 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-04-06 01:39:16.208341 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-04-06 01:39:16.208382 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-04-06 01:39:16.208387 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-04-06 01:39:16.208392 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-04-06 01:39:16.640191 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-06 01:39:16.646046 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-06 01:39:16.648480 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-04-06 01:39:16.650628 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-04-06 01:39:16.669268 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-04-06 01:39:16.674407 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-04-06 01:39:17.198376 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=bb7ca794-a370-41e1-b0d0-82d123e16a05]
2026-04-06 01:39:17.207461 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-04-06 01:39:19.801587 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=d180ec14-e159-4180-82cb-d01a3342930c]
2026-04-06 01:39:19.810202 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-04-06 01:39:19.825388 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=5872ea60-fe11-4979-bb27-b05f1cf0a527]
2026-04-06 01:39:19.829607 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-04-06 01:39:19.850047 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=43e26771-fa08-421b-85bd-bea5ed7d9f4d]
2026-04-06 01:39:19.850151 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=f369a6c0-cc6b-402f-8203-4a676105f554]
2026-04-06 01:39:19.859015 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-04-06 01:39:19.864959 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-04-06 01:39:19.873194 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=8498d812-c1b1-46ed-92c2-ee1d1b35b15c]
2026-04-06 01:39:19.873473 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=48ce9836-bd13-434e-b336-3f85c4684867]
2026-04-06 01:39:19.880558 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-04-06 01:39:19.880817 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-04-06 01:39:19.930755 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=4a868051-6760-4c3b-ae8b-ad951cf235de]
2026-04-06 01:39:19.948582 | orchestrator | local_file.id_rsa_pub: Creating...
2026-04-06 01:39:19.953105 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=d211c721fb826086c4d68fcacefd34b8d60304be]
2026-04-06 01:39:19.959906 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-04-06 01:39:19.963355 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=c573d5c69a4fb99903d2d10ee821e3d4e6a165fe]
2026-04-06 01:39:19.963805 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=c3f554c9-cd3a-426a-b9ad-0bd91481d9b0]
2026-04-06 01:39:19.973662 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-04-06 01:39:20.140141 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=71f71275-aa74-4331-91d6-c9a393376103]
2026-04-06 01:39:20.553592 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=14d1d2ef-4ebf-4498-8f44-8e84ff37ee7c]
2026-04-06 01:39:20.897702 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=97f7a00f-4a08-44f9-8234-38a765c347a5]
2026-04-06 01:39:20.906480 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-04-06 01:39:23.202517 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=9d494db8-bac9-4b6a-86f1-1860f22fc6aa]
2026-04-06 01:39:23.249098 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=d99642af-b055-4abf-9556-6a3108e513b8]
2026-04-06 01:39:23.251325 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=a86fd0c9-311f-45be-821d-b1ac3da783a1]
2026-04-06 01:39:23.277278 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=a48c2299-66c1-490a-8d0b-fe346fc666cd]
2026-04-06 01:39:23.305555 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=23f8d4f9-bada-4d0a-9690-8d695318e058]
2026-04-06 01:39:23.336997 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=40f67feb-ef43-49bb-8f67-9921a7107336]
2026-04-06 01:39:23.887493 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=c1d19c7c-400a-4951-b795-626e829143d2]
2026-04-06 01:39:23.893934 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-04-06 01:39:23.894615 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-04-06 01:39:23.898075 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-04-06 01:39:24.105833 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=3eb72d5b-460e-448e-9eaa-d44672db27ce]
2026-04-06 01:39:24.122807 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-04-06 01:39:24.123179 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-04-06 01:39:24.124299 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-04-06 01:39:24.124410 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-04-06 01:39:24.124457 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-04-06 01:39:24.127733 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-04-06 01:39:24.176399 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=8a84bc16-4c69-4310-9c0d-0dec03f78446]
2026-04-06 01:39:24.181727 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-04-06 01:39:24.181808 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-04-06 01:39:24.186366 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-04-06 01:39:24.268892 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=baef4a72-add1-4a2f-8dba-0d6a26291b31]
2026-04-06 01:39:24.284546 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-04-06 01:39:24.361746 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=5bcd4365-edc4-4797-8a01-fdd71d34aa93]
2026-04-06 01:39:24.380729 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-04-06 01:39:24.456766 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=3ef1453c-574b-4c7b-b0a6-fa5f0322af74]
2026-04-06 01:39:24.474193 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-04-06 01:39:24.533304 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=19405fdc-5e2f-4c06-9f6d-c846b6d4cdfd]
2026-04-06 01:39:24.549564 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-04-06 01:39:24.623480 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=0ec38894-05f7-4d7c-9d2e-5bf24ff2c5d6]
2026-04-06 01:39:24.633516 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-04-06 01:39:24.725542 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=925b9b7c-9105-4a6c-94bf-a1ffac5b04d7]
2026-04-06 01:39:24.740872 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-04-06 01:39:24.793999 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=6cadb289-5c19-4727-8b26-42aafe810a32]
2026-04-06 01:39:24.802242 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-04-06 01:39:25.021618 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=0a0270dd-bec6-4619-ad34-e70b031acd57]
2026-04-06 01:39:25.163719 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=195d9181-adf2-42da-813a-d4d97ccd2842]
2026-04-06 01:39:25.206853 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=3a4c1d12-6b0d-4343-87b8-94ae707d8896]
2026-04-06 01:39:25.285759 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=2763b87c-5281-4e00-ba53-76590b12e813]
2026-04-06 01:39:25.297502 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 0s [id=fc1736d2-a17d-4dd3-a1e9-05946beea953]
2026-04-06 01:39:25.298877 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=3285bed2-cec4-4ccb-91c4-fb3e80a0e3a0]
2026-04-06 01:39:25.399946 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=6635e5ac-c7e6-44be-99c8-363b9d2ef633]
2026-04-06 01:39:25.487744 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 0s [id=1ca5da80-3dc2-4235-bbb4-bafbe41c7881]
2026-04-06 01:39:25.804928 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=e88d74bf-1721-4cba-962f-60943799dbc5]
2026-04-06 01:39:26.513111 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=f390c9a0-2093-4c1d-8c9f-426a0ff11ccc]
2026-04-06 01:39:26.534809 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-04-06 01:39:26.551876 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-04-06 01:39:26.551972 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-04-06 01:39:26.559256 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-04-06 01:39:26.559765 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-04-06 01:39:26.569864 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-04-06 01:39:26.571348 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-04-06 01:39:28.164840 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=26e52e6b-fc4f-4587-82c2-eb497b1a8980]
2026-04-06 01:39:28.172120 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-04-06 01:39:28.183071 | orchestrator | local_file.inventory: Creating...
2026-04-06 01:39:28.185723 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-04-06 01:39:28.188994 | orchestrator | local_file.inventory: Creation complete after 0s [id=773df4a3687ff1c9c6816d337a8266726771a70a]
2026-04-06 01:39:28.190548 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=d1577da81b0d28aaa2e83b2e6abf42c341616205]
2026-04-06 01:39:29.021676 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=26e52e6b-fc4f-4587-82c2-eb497b1a8980]
2026-04-06 01:39:36.552586 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-04-06 01:39:36.552707 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-04-06 01:39:36.568930 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-04-06 01:39:36.569010 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-04-06 01:39:36.572168 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-04-06 01:39:36.573396 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-04-06 01:39:46.558427 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-04-06 01:39:46.558595 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-04-06 01:39:46.569882 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-04-06 01:39:46.569961 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-04-06 01:39:46.573099 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-04-06 01:39:46.574289 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-04-06 01:39:47.011127 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=58ad74e2-94a3-4156-b1fb-52c8f352340c]
2026-04-06 01:39:47.022134 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=749a8a46-e810-4626-967a-96a55bc87d48]
2026-04-06 01:39:47.066802 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 20s [id=2e31c03e-ef3a-4047-9dd7-b362f808ee78]
2026-04-06 01:39:47.131717 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=dd790b9b-63f4-412f-b89f-63a5efc92c97]
2026-04-06 01:39:56.559014 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-04-06 01:39:56.570951 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-04-06 01:39:57.412626 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 30s [id=cf4e827d-c2d7-4cba-bf17-a11abedc8083]
2026-04-06 01:39:57.525056 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=ba065d88-73b1-4b3f-9f9b-34a43773062a]
2026-04-06 01:39:57.533789 | orchestrator | null_resource.node_semaphore: Creating...
2026-04-06 01:39:57.549661 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=345241112485702370]
2026-04-06 01:39:57.552127 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-04-06 01:39:57.552465 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-04-06 01:39:57.553130 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-04-06 01:39:57.553197 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-04-06 01:39:57.553477 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-04-06 01:39:57.558126 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-04-06 01:39:57.564395 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-04-06 01:39:57.568653 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-04-06 01:39:57.574055 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-04-06 01:39:57.592598 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-04-06 01:40:00.950122 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=dd790b9b-63f4-412f-b89f-63a5efc92c97/71f71275-aa74-4331-91d6-c9a393376103]
2026-04-06 01:40:00.976853 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=749a8a46-e810-4626-967a-96a55bc87d48/4a868051-6760-4c3b-ae8b-ad951cf235de]
2026-04-06 01:40:00.991326 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=58ad74e2-94a3-4156-b1fb-52c8f352340c/d180ec14-e159-4180-82cb-d01a3342930c]
2026-04-06 01:40:01.025257 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=58ad74e2-94a3-4156-b1fb-52c8f352340c/c3f554c9-cd3a-426a-b9ad-0bd91481d9b0]
2026-04-06 01:40:01.025368 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=dd790b9b-63f4-412f-b89f-63a5efc92c97/8498d812-c1b1-46ed-92c2-ee1d1b35b15c]
2026-04-06 01:40:01.051221 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=749a8a46-e810-4626-967a-96a55bc87d48/48ce9836-bd13-434e-b336-3f85c4684867]
2026-04-06 01:40:07.110196 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 9s [id=58ad74e2-94a3-4156-b1fb-52c8f352340c/43e26771-fa08-421b-85bd-bea5ed7d9f4d]
2026-04-06 01:40:07.127759 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 9s [id=dd790b9b-63f4-412f-b89f-63a5efc92c97/5872ea60-fe11-4979-bb27-b05f1cf0a527]
2026-04-06 01:40:07.138121 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 9s [id=749a8a46-e810-4626-967a-96a55bc87d48/f369a6c0-cc6b-402f-8203-4a676105f554]
2026-04-06 01:40:07.577466 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-04-06 01:40:17.577980 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-04-06 01:40:18.538846 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=f51a169e-0fbd-4f02-ae95-5215b6f83708]
2026-04-06 01:40:18.559851 | orchestrator |
2026-04-06 01:40:18.559937 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-04-06 01:40:18.559953 | orchestrator |
2026-04-06 01:40:18.559964 | orchestrator | Outputs:
2026-04-06 01:40:18.559974 | orchestrator |
2026-04-06 01:40:18.559984 | orchestrator | manager_address =
2026-04-06 01:40:18.559994 | orchestrator | private_key =
2026-04-06 01:40:18.853062 | orchestrator | ok: Runtime: 0:01:07.835492
2026-04-06 01:40:18.894046 |
2026-04-06 01:40:18.894272 | TASK [Fetch manager address]
2026-04-06 01:40:19.462582 | orchestrator | ok
2026-04-06 01:40:19.473079 |
2026-04-06 01:40:19.473205 | TASK [Set manager_host address]
2026-04-06 01:40:19.561469 | orchestrator | ok
2026-04-06 01:40:19.571888 |
2026-04-06 01:40:19.572009 | LOOP [Update ansible collections]
2026-04-06 01:40:21.585905 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-06 01:40:21.586526 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-06 01:40:21.586640 | orchestrator | Starting galaxy collection install process
2026-04-06 01:40:21.586694 | orchestrator | Process install dependency map
2026-04-06 01:40:21.586743 | orchestrator | Starting collection install process
2026-04-06 01:40:21.586789 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons'
2026-04-06 01:40:21.586865 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons
2026-04-06 01:40:21.586919 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-04-06 01:40:21.587009 | orchestrator | ok: Item: commons Runtime: 0:00:01.640176
2026-04-06 01:40:22.738710 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-06 01:40:22.738905 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-06 01:40:22.738962 | orchestrator | Starting galaxy collection install process
2026-04-06 01:40:22.739002 | orchestrator | Process install dependency map
2026-04-06 01:40:22.739039 | orchestrator | Starting collection install process
2026-04-06 01:40:22.739074 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services'
2026-04-06 01:40:22.739109 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services
2026-04-06 01:40:22.739142 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-06 01:40:22.739196 | orchestrator | ok: Item: services Runtime: 0:00:00.838247
2026-04-06 01:40:22.762762 |
2026-04-06 01:40:22.762984 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-04-06 01:40:33.370560 | orchestrator | ok
2026-04-06 01:40:33.380870 |
2026-04-06 01:40:33.380990 | TASK [Wait a little longer for the manager so that everything is ready]
2026-04-06 01:41:33.425814 | orchestrator | ok
2026-04-06 01:41:33.436824 |
2026-04-06 01:41:33.436976 | TASK [Fetch manager ssh hostkey]
2026-04-06 01:41:35.010180 | orchestrator | Output suppressed because no_log was given
2026-04-06 01:41:35.024836 |
2026-04-06 01:41:35.025009 | TASK [Get ssh keypair from terraform environment]
2026-04-06 01:41:35.561124 | orchestrator | ok: Runtime: 0:00:00.010204
2026-04-06 01:41:35.578082 |
2026-04-06 01:41:35.578236 | TASK [Point out that the following task takes some time and does not give any output]
2026-04-06 01:41:35.617478 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-04-06 01:41:35.627933 |
2026-04-06 01:41:35.628058 | TASK [Run manager part 0]
2026-04-06 01:41:36.764313 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-06 01:41:36.851951 | orchestrator |
2026-04-06 01:41:36.852016 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-04-06 01:41:36.852027 | orchestrator |
2026-04-06 01:41:36.852046 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-04-06 01:41:39.025580 | orchestrator | ok: [testbed-manager]
2026-04-06 01:41:39.025641 | orchestrator |
2026-04-06 01:41:39.025667 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-04-06 01:41:39.025677 | orchestrator |
2026-04-06 01:41:39.025714 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-06 01:41:41.212313 | orchestrator | ok: [testbed-manager]
2026-04-06 01:41:41.212377 | orchestrator |
2026-04-06 01:41:41.212385 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-04-06 01:41:41.908313 | orchestrator | ok: [testbed-manager]
2026-04-06 01:41:41.908392 | orchestrator |
2026-04-06 01:41:41.908402 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-04-06 01:41:41.972297 | orchestrator | skipping: [testbed-manager]
2026-04-06 01:41:41.972349 | orchestrator |
2026-04-06 01:41:41.972359 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-04-06 01:41:42.010064 | orchestrator | skipping: [testbed-manager]
2026-04-06 01:41:42.010113 | orchestrator |
2026-04-06 01:41:42.010120 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-04-06 01:41:42.045908 | orchestrator | skipping: [testbed-manager]
2026-04-06 01:41:42.045956 | orchestrator |
2026-04-06 01:41:42.045962 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-04-06 01:41:42.877673 | orchestrator | changed: [testbed-manager]
2026-04-06 01:41:42.877753 | orchestrator |
2026-04-06 01:41:42.877765 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-04-06 01:44:43.306770 | orchestrator | changed: [testbed-manager]
2026-04-06 01:44:43.306838 | orchestrator |
2026-04-06 01:44:43.306850 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-06 01:46:08.612916 | orchestrator | changed: [testbed-manager]
2026-04-06 01:46:08.612994 | orchestrator |
2026-04-06 01:46:08.613015 | orchestrator | TASK [Install required packages] ***********************************************
2026-04-06 01:46:34.478783 | orchestrator | changed: [testbed-manager]
2026-04-06 01:46:34.478873 | orchestrator |
2026-04-06 01:46:34.478916 | orchestrator | TASK [Remove some python packages] *********************************************
2026-04-06 01:46:44.783219 | orchestrator | changed: [testbed-manager]
2026-04-06 01:46:44.783264 | orchestrator |
2026-04-06 01:46:44.783272 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-04-06 01:46:44.846323 | orchestrator | ok: [testbed-manager]
2026-04-06 01:46:44.846364 | orchestrator |
2026-04-06 01:46:44.846375 | orchestrator | TASK [Get current user] ********************************************************
2026-04-06 01:46:45.712372 | orchestrator | ok: [testbed-manager]
2026-04-06 01:46:45.712441 | orchestrator |
2026-04-06 01:46:45.712450 | orchestrator | TASK [Create venv directory] ***************************************************
2026-04-06 01:46:46.566562 | orchestrator | changed: [testbed-manager]
2026-04-06 01:46:46.566658 | orchestrator |
2026-04-06 01:46:46.566678 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-04-06 01:46:53.702806 | orchestrator | changed: [testbed-manager]
2026-04-06 01:46:53.702898 | orchestrator |
2026-04-06 01:46:53.702931 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-04-06 01:47:00.394825 | orchestrator | changed: [testbed-manager]
2026-04-06 01:47:00.394872 | orchestrator |
2026-04-06 01:47:00.394883 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-04-06 01:47:03.541375 | orchestrator | changed: [testbed-manager]
2026-04-06 01:47:03.541475 | orchestrator |
2026-04-06 01:47:03.541493 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2026-04-06 01:47:05.584001 | orchestrator | changed: [testbed-manager]
2026-04-06 01:47:05.584098 | orchestrator |
2026-04-06 01:47:05.584116 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2026-04-06 01:47:06.783388 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2026-04-06 01:47:06.783470 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2026-04-06 01:47:06.783482 | orchestrator |
2026-04-06 01:47:06.783495 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2026-04-06 01:47:06.834522 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2026-04-06 01:47:06.834627 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2026-04-06 01:47:06.834652 | orchestrator | 2.19.
Deprecation warnings can be disabled by setting 2026-04-06 01:47:06.834677 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-04-06 01:47:12.563221 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-06 01:47:12.563270 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-06 01:47:12.563279 | orchestrator | 2026-04-06 01:47:12.563287 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-06 01:47:13.203622 | orchestrator | changed: [testbed-manager] 2026-04-06 01:47:13.203732 | orchestrator | 2026-04-06 01:47:13.203750 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-06 01:48:35.542284 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-06 01:48:35.542333 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-06 01:48:35.542340 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-06 01:48:35.542347 | orchestrator | 2026-04-06 01:48:35.542353 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-06 01:48:38.022161 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-06 01:48:38.022248 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-06 01:48:38.022263 | orchestrator | 2026-04-06 01:48:38.022279 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-06 01:48:38.022291 | orchestrator | 2026-04-06 01:48:38.022301 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-06 01:48:39.554293 | orchestrator | ok: [testbed-manager] 2026-04-06 01:48:39.554387 | orchestrator | 2026-04-06 01:48:39.554404 | orchestrator | TASK [osism.commons.operator : Gather variables 
for each operating system] ***** 2026-04-06 01:48:39.597491 | orchestrator | ok: [testbed-manager] 2026-04-06 01:48:39.597597 | orchestrator | 2026-04-06 01:48:39.597624 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-06 01:48:39.672010 | orchestrator | ok: [testbed-manager] 2026-04-06 01:48:39.672090 | orchestrator | 2026-04-06 01:48:39.672098 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-06 01:48:40.535320 | orchestrator | changed: [testbed-manager] 2026-04-06 01:48:40.535365 | orchestrator | 2026-04-06 01:48:40.535375 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-06 01:48:41.301451 | orchestrator | changed: [testbed-manager] 2026-04-06 01:48:41.301494 | orchestrator | 2026-04-06 01:48:41.301502 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-06 01:48:42.781312 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-06 01:48:42.781391 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-06 01:48:42.781402 | orchestrator | 2026-04-06 01:48:42.781412 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-06 01:48:44.312467 | orchestrator | changed: [testbed-manager] 2026-04-06 01:48:44.312584 | orchestrator | 2026-04-06 01:48:44.312615 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-06 01:48:46.152385 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-06 01:48:46.152427 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-06 01:48:46.152442 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-06 01:48:46.152449 | orchestrator | 2026-04-06 01:48:46.152457 | orchestrator | TASK [osism.commons.operator : 
Set custom environment variables in .bashrc configuration file] *** 2026-04-06 01:48:46.215220 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:48:46.215260 | orchestrator | 2026-04-06 01:48:46.215268 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-06 01:48:46.287520 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:48:46.287560 | orchestrator | 2026-04-06 01:48:46.287567 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-06 01:48:46.874691 | orchestrator | changed: [testbed-manager] 2026-04-06 01:48:46.874777 | orchestrator | 2026-04-06 01:48:46.874792 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-06 01:48:46.939943 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:48:46.939983 | orchestrator | 2026-04-06 01:48:46.939990 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-06 01:48:47.888527 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-06 01:48:47.888575 | orchestrator | changed: [testbed-manager] 2026-04-06 01:48:47.888587 | orchestrator | 2026-04-06 01:48:47.888595 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-06 01:48:47.927648 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:48:47.927729 | orchestrator | 2026-04-06 01:48:47.927749 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-06 01:48:47.969194 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:48:47.969285 | orchestrator | 2026-04-06 01:48:47.969303 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-06 01:48:48.007329 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:48:48.007434 | orchestrator | 2026-04-06 01:48:48.007451 | 
orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-06 01:48:48.112092 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:48:48.112208 | orchestrator | 2026-04-06 01:48:48.112239 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-06 01:48:48.897001 | orchestrator | ok: [testbed-manager] 2026-04-06 01:48:48.897090 | orchestrator | 2026-04-06 01:48:48.897098 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-06 01:48:48.897103 | orchestrator | 2026-04-06 01:48:48.897109 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-06 01:48:50.368500 | orchestrator | ok: [testbed-manager] 2026-04-06 01:48:50.368570 | orchestrator | 2026-04-06 01:48:50.368580 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-06 01:48:51.370178 | orchestrator | changed: [testbed-manager] 2026-04-06 01:48:51.370277 | orchestrator | 2026-04-06 01:48:51.370304 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 01:48:51.370324 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-06 01:48:51.370361 | orchestrator | 2026-04-06 01:48:51.936839 | orchestrator | ok: Runtime: 0:07:15.504968 2026-04-06 01:48:51.946083 | 2026-04-06 01:48:51.946201 | TASK [Point out that the log in on the manager is now possible] 2026-04-06 01:48:51.976334 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-04-06 01:48:51.983181 | 2026-04-06 01:48:51.983398 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-06 01:48:52.012596 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. 
There is no further output of this here. It takes a few minutes for this task to complete. 2026-04-06 01:48:52.019790 | 2026-04-06 01:48:52.019920 | TASK [Run manager part 1 + 2] 2026-04-06 01:48:52.921127 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-06 01:48:52.980362 | orchestrator | 2026-04-06 01:48:52.980412 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-06 01:48:52.980419 | orchestrator | 2026-04-06 01:48:52.980432 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-06 01:48:56.127604 | orchestrator | ok: [testbed-manager] 2026-04-06 01:48:56.127789 | orchestrator | 2026-04-06 01:48:56.127860 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-06 01:48:56.160156 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:48:56.160217 | orchestrator | 2026-04-06 01:48:56.160230 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-06 01:48:56.198666 | orchestrator | ok: [testbed-manager] 2026-04-06 01:48:56.198723 | orchestrator | 2026-04-06 01:48:56.198731 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-06 01:48:56.233204 | orchestrator | ok: [testbed-manager] 2026-04-06 01:48:56.233260 | orchestrator | 2026-04-06 01:48:56.233268 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-06 01:48:56.313392 | orchestrator | ok: [testbed-manager] 2026-04-06 01:48:56.313489 | orchestrator | 2026-04-06 01:48:56.313506 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-06 01:48:56.386788 | orchestrator | ok: [testbed-manager] 2026-04-06 01:48:56.386846 | orchestrator | 2026-04-06 01:48:56.386853 | orchestrator | TASK 
[osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-06 01:48:56.431103 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-06 01:48:56.431183 | orchestrator | 2026-04-06 01:48:56.431195 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-06 01:48:57.174800 | orchestrator | ok: [testbed-manager] 2026-04-06 01:48:57.174895 | orchestrator | 2026-04-06 01:48:57.174914 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-06 01:48:57.228355 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:48:57.228432 | orchestrator | 2026-04-06 01:48:57.228442 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-06 01:48:58.668486 | orchestrator | changed: [testbed-manager] 2026-04-06 01:48:58.668573 | orchestrator | 2026-04-06 01:48:58.668587 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-06 01:48:59.303499 | orchestrator | ok: [testbed-manager] 2026-04-06 01:48:59.303608 | orchestrator | 2026-04-06 01:48:59.303627 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-06 01:49:00.556767 | orchestrator | changed: [testbed-manager] 2026-04-06 01:49:00.556939 | orchestrator | 2026-04-06 01:49:00.556960 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-06 01:49:17.425337 | orchestrator | changed: [testbed-manager] 2026-04-06 01:49:17.425401 | orchestrator | 2026-04-06 01:49:17.425411 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-06 01:49:18.178537 | orchestrator | ok: [testbed-manager] 2026-04-06 01:49:18.178611 | orchestrator | 2026-04-06 
01:49:18.178624 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-06 01:49:18.236020 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:49:18.236103 | orchestrator | 2026-04-06 01:49:18.236120 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-06 01:49:19.222776 | orchestrator | changed: [testbed-manager] 2026-04-06 01:49:19.222880 | orchestrator | 2026-04-06 01:49:19.222901 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-06 01:49:20.263998 | orchestrator | changed: [testbed-manager] 2026-04-06 01:49:20.264042 | orchestrator | 2026-04-06 01:49:20.264051 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-06 01:49:20.802943 | orchestrator | changed: [testbed-manager] 2026-04-06 01:49:20.802980 | orchestrator | 2026-04-06 01:49:20.802986 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-06 01:49:20.840222 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-06 01:49:20.840292 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-06 01:49:20.840298 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-06 01:49:20.840303 | orchestrator | deprecation_warnings=False in ansible.cfg. 
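[Editor's note] The deprecation warning above states its own remedy: the messages can be silenced on the Ansible controller. A minimal ansible.cfg fragment, using the standard `[defaults]` section and the `deprecation_warnings` key the warning names:

```ini
; ansible.cfg on the controller — suppress deprecation warnings
; (sketch; section and key are standard Ansible configuration)
[defaults]
deprecation_warnings = False
```

The same setting can also be supplied per-run via the `ANSIBLE_DEPRECATION_WARNINGS` environment variable, which is useful in CI where editing ansible.cfg is awkward.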
2026-04-06 01:49:23.153150 | orchestrator | changed: [testbed-manager] 2026-04-06 01:49:23.153191 | orchestrator | 2026-04-06 01:49:23.153197 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-06 01:49:32.696104 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-06 01:49:32.696213 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-06 01:49:32.696235 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-06 01:49:32.696253 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-06 01:49:32.696279 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-06 01:49:32.696294 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-06 01:49:32.696310 | orchestrator | 2026-04-06 01:49:32.696327 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-06 01:49:34.023396 | orchestrator | changed: [testbed-manager] 2026-04-06 01:49:34.023434 | orchestrator | 2026-04-06 01:49:34.023441 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-06 01:49:37.459833 | orchestrator | changed: [testbed-manager] 2026-04-06 01:49:37.459875 | orchestrator | 2026-04-06 01:49:37.459883 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-06 01:49:37.499232 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:49:37.499273 | orchestrator | 2026-04-06 01:49:37.499280 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-06 01:51:27.807711 | orchestrator | changed: [testbed-manager] 2026-04-06 01:51:27.807754 | orchestrator | 2026-04-06 01:51:27.807762 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-06 01:51:29.060493 | orchestrator | ok: [testbed-manager] 2026-04-06 01:51:29.060588 | 
orchestrator | 2026-04-06 01:51:29.060607 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 01:51:29.060621 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-06 01:51:29.060633 | orchestrator | 2026-04-06 01:51:29.660828 | orchestrator | ok: Runtime: 0:02:36.809236 2026-04-06 01:51:29.678667 | 2026-04-06 01:51:29.678814 | TASK [Reboot manager] 2026-04-06 01:51:31.214614 | orchestrator | ok: Runtime: 0:00:01.061805 2026-04-06 01:51:31.232946 | 2026-04-06 01:51:31.233115 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-06 01:51:47.858201 | orchestrator | ok 2026-04-06 01:51:47.866789 | 2026-04-06 01:51:47.866955 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-06 01:52:47.910101 | orchestrator | ok 2026-04-06 01:52:47.919664 | 2026-04-06 01:52:47.919782 | TASK [Deploy manager + bootstrap nodes] 2026-04-06 01:52:50.592512 | orchestrator | 2026-04-06 01:52:50.592757 | orchestrator | # DEPLOY MANAGER 2026-04-06 01:52:50.592786 | orchestrator | 2026-04-06 01:52:50.592801 | orchestrator | + set -e 2026-04-06 01:52:50.592814 | orchestrator | + echo 2026-04-06 01:52:50.592829 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-06 01:52:50.592847 | orchestrator | + echo 2026-04-06 01:52:50.592898 | orchestrator | + cat /opt/manager-vars.sh 2026-04-06 01:52:50.595384 | orchestrator | export NUMBER_OF_NODES=6 2026-04-06 01:52:50.595472 | orchestrator | 2026-04-06 01:52:50.595491 | orchestrator | export CEPH_VERSION=reef 2026-04-06 01:52:50.595574 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-06 01:52:50.595595 | orchestrator | export MANAGER_VERSION=9.5.0 2026-04-06 01:52:50.595633 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-04-06 01:52:50.595652 | orchestrator | 2026-04-06 01:52:50.595682 | orchestrator | export ARA=false 2026-04-06 01:52:50.595702 | orchestrator 
| export DEPLOY_MODE=manager 2026-04-06 01:52:50.595730 | orchestrator | export TEMPEST=false 2026-04-06 01:52:50.595752 | orchestrator | export IS_ZUUL=true 2026-04-06 01:52:50.595771 | orchestrator | 2026-04-06 01:52:50.595799 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235 2026-04-06 01:52:50.595820 | orchestrator | export EXTERNAL_API=false 2026-04-06 01:52:50.595840 | orchestrator | 2026-04-06 01:52:50.595860 | orchestrator | export IMAGE_USER=ubuntu 2026-04-06 01:52:50.595884 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-06 01:52:50.595903 | orchestrator | 2026-04-06 01:52:50.595923 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-06 01:52:50.595950 | orchestrator | 2026-04-06 01:52:50.595962 | orchestrator | + echo 2026-04-06 01:52:50.595980 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-06 01:52:50.596955 | orchestrator | ++ export INTERACTIVE=false 2026-04-06 01:52:50.597067 | orchestrator | ++ INTERACTIVE=false 2026-04-06 01:52:50.597145 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-06 01:52:50.597221 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-06 01:52:50.597369 | orchestrator | + source /opt/manager-vars.sh 2026-04-06 01:52:50.597383 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-06 01:52:50.597395 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-06 01:52:50.597407 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-06 01:52:50.597456 | orchestrator | ++ CEPH_VERSION=reef 2026-04-06 01:52:50.597475 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-06 01:52:50.597494 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-06 01:52:50.597512 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-06 01:52:50.597531 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-06 01:52:50.597550 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-06 01:52:50.597583 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-06 01:52:50.597603 | orchestrator | ++ export ARA=false 
2026-04-06 01:52:50.597621 | orchestrator | ++ ARA=false 2026-04-06 01:52:50.597641 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-06 01:52:50.597660 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-06 01:52:50.597680 | orchestrator | ++ export TEMPEST=false 2026-04-06 01:52:50.597693 | orchestrator | ++ TEMPEST=false 2026-04-06 01:52:50.597703 | orchestrator | ++ export IS_ZUUL=true 2026-04-06 01:52:50.597714 | orchestrator | ++ IS_ZUUL=true 2026-04-06 01:52:50.597725 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235 2026-04-06 01:52:50.597736 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235 2026-04-06 01:52:50.597747 | orchestrator | ++ export EXTERNAL_API=false 2026-04-06 01:52:50.597758 | orchestrator | ++ EXTERNAL_API=false 2026-04-06 01:52:50.597768 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-06 01:52:50.597779 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-06 01:52:50.597790 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-06 01:52:50.597801 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-06 01:52:50.597812 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-06 01:52:50.597829 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-06 01:52:50.597840 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-06 01:52:50.651889 | orchestrator | + docker version 2026-04-06 01:52:50.778066 | orchestrator | Client: Docker Engine - Community 2026-04-06 01:52:50.778188 | orchestrator | Version: 27.5.1 2026-04-06 01:52:50.778205 | orchestrator | API version: 1.47 2026-04-06 01:52:50.778215 | orchestrator | Go version: go1.22.11 2026-04-06 01:52:50.778223 | orchestrator | Git commit: 9f9e405 2026-04-06 01:52:50.778231 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-06 01:52:50.778239 | orchestrator | OS/Arch: linux/amd64 2026-04-06 01:52:50.778247 | orchestrator | Context: default 2026-04-06 01:52:50.778254 | orchestrator | 2026-04-06 01:52:50.778263 | 
orchestrator | Server: Docker Engine - Community 2026-04-06 01:52:50.778270 | orchestrator | Engine: 2026-04-06 01:52:50.778278 | orchestrator | Version: 27.5.1 2026-04-06 01:52:50.778286 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-06 01:52:50.778318 | orchestrator | Go version: go1.22.11 2026-04-06 01:52:50.778325 | orchestrator | Git commit: 4c9b3b0 2026-04-06 01:52:50.778333 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-06 01:52:50.778340 | orchestrator | OS/Arch: linux/amd64 2026-04-06 01:52:50.778347 | orchestrator | Experimental: false 2026-04-06 01:52:50.778355 | orchestrator | containerd: 2026-04-06 01:52:50.778362 | orchestrator | Version: v2.2.2 2026-04-06 01:52:50.778370 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-04-06 01:52:50.778377 | orchestrator | runc: 2026-04-06 01:52:50.778385 | orchestrator | Version: 1.3.4 2026-04-06 01:52:50.778392 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-04-06 01:52:50.778399 | orchestrator | docker-init: 2026-04-06 01:52:50.778407 | orchestrator | Version: 0.19.0 2026-04-06 01:52:50.778415 | orchestrator | GitCommit: de40ad0 2026-04-06 01:52:50.780832 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-06 01:52:50.789792 | orchestrator | + set -e 2026-04-06 01:52:50.789860 | orchestrator | + source /opt/manager-vars.sh 2026-04-06 01:52:50.789870 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-06 01:52:50.789881 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-06 01:52:50.789894 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-06 01:52:50.789906 | orchestrator | ++ CEPH_VERSION=reef 2026-04-06 01:52:50.789918 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-06 01:52:50.789930 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-06 01:52:50.789943 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-06 01:52:50.789957 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-06 01:52:50.789970 | orchestrator 
| ++ export OPENSTACK_VERSION=2024.2 2026-04-06 01:52:50.789983 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-06 01:52:50.789996 | orchestrator | ++ export ARA=false 2026-04-06 01:52:50.790006 | orchestrator | ++ ARA=false 2026-04-06 01:52:50.790014 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-06 01:52:50.790063 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-06 01:52:50.790071 | orchestrator | ++ export TEMPEST=false 2026-04-06 01:52:50.790079 | orchestrator | ++ TEMPEST=false 2026-04-06 01:52:50.790086 | orchestrator | ++ export IS_ZUUL=true 2026-04-06 01:52:50.790094 | orchestrator | ++ IS_ZUUL=true 2026-04-06 01:52:50.790101 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235 2026-04-06 01:52:50.790109 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235 2026-04-06 01:52:50.790116 | orchestrator | ++ export EXTERNAL_API=false 2026-04-06 01:52:50.790124 | orchestrator | ++ EXTERNAL_API=false 2026-04-06 01:52:50.790131 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-06 01:52:50.790138 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-06 01:52:50.790146 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-06 01:52:50.790153 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-06 01:52:50.790191 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-06 01:52:50.790205 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-06 01:52:50.790218 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-06 01:52:50.790229 | orchestrator | ++ export INTERACTIVE=false 2026-04-06 01:52:50.790242 | orchestrator | ++ INTERACTIVE=false 2026-04-06 01:52:50.790254 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-06 01:52:50.790272 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-06 01:52:50.790295 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-04-06 01:52:50.790303 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-04-06 01:52:50.797055 | orchestrator | + set -e 2026-04-06 
01:52:50.797118 | orchestrator | + VERSION=9.5.0 2026-04-06 01:52:50.797131 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-04-06 01:52:50.807701 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-04-06 01:52:50.807800 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-04-06 01:52:50.812735 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-04-06 01:52:50.815499 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-04-06 01:52:50.826716 | orchestrator | /opt/configuration ~ 2026-04-06 01:52:50.826797 | orchestrator | + set -e 2026-04-06 01:52:50.826810 | orchestrator | + pushd /opt/configuration 2026-04-06 01:52:50.826821 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-06 01:52:50.828801 | orchestrator | + source /opt/venv/bin/activate 2026-04-06 01:52:50.832239 | orchestrator | ++ deactivate nondestructive 2026-04-06 01:52:50.832313 | orchestrator | ++ '[' -n '' ']' 2026-04-06 01:52:50.832328 | orchestrator | ++ '[' -n '' ']' 2026-04-06 01:52:50.832363 | orchestrator | ++ hash -r 2026-04-06 01:52:50.832374 | orchestrator | ++ '[' -n '' ']' 2026-04-06 01:52:50.832384 | orchestrator | ++ unset VIRTUAL_ENV 2026-04-06 01:52:50.832394 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-04-06 01:52:50.832403 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-04-06 01:52:50.832417 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-04-06 01:52:50.832434 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-04-06 01:52:50.832450 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-04-06 01:52:50.832465 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-04-06 01:52:50.832482 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-06 01:52:50.832496 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-06 01:52:50.832510 | orchestrator | ++ export PATH 2026-04-06 01:52:50.832528 | orchestrator | ++ '[' -n '' ']' 2026-04-06 01:52:50.832544 | orchestrator | ++ '[' -z '' ']' 2026-04-06 01:52:50.832562 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-04-06 01:52:50.832578 | orchestrator | ++ PS1='(venv) ' 2026-04-06 01:52:50.832592 | orchestrator | ++ export PS1 2026-04-06 01:52:50.832602 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-04-06 01:52:50.832612 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-04-06 01:52:50.832622 | orchestrator | ++ hash -r 2026-04-06 01:52:50.832631 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-04-06 01:52:52.235259 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-04-06 01:52:52.236124 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1) 2026-04-06 01:52:52.237870 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-04-06 01:52:52.239371 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-04-06 01:52:52.241059 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-04-06 01:52:52.253037 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2) 2026-04-06 01:52:52.254534 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-04-06 01:52:52.256085 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-04-06 01:52:52.257607 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-04-06 01:52:52.302947 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7) 2026-04-06 01:52:52.304417 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-04-06 01:52:52.306534 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-04-06 01:52:52.307830 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-04-06 01:52:52.312331 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-04-06 01:52:52.574416 | orchestrator | ++ which gilt 2026-04-06 01:52:52.578839 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-04-06 01:52:52.578907 | orchestrator | + /opt/venv/bin/gilt overlay 2026-04-06 01:52:52.944818 | orchestrator | osism.cfg-generics: 2026-04-06 01:52:53.135729 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-04-06 01:52:53.135839 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-04-06 01:52:53.135870 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-04-06 01:52:53.135884 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-04-06 01:52:54.381295 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-04-06 01:52:54.397527 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-04-06 01:52:54.881561 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-04-06 01:52:54.936855 | orchestrator | ~ 2026-04-06 01:52:54.936958 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-06 01:52:54.936970 | orchestrator | + deactivate 2026-04-06 01:52:54.936978 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-06 01:52:54.936987 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-06 01:52:54.936993 | orchestrator | + export PATH 2026-04-06 01:52:54.937000 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-06 01:52:54.937007 | orchestrator | + '[' -n '' ']' 2026-04-06 01:52:54.937015 | orchestrator | + hash -r 2026-04-06 01:52:54.937022 | orchestrator | + '[' -n '' ']' 2026-04-06 01:52:54.937028 | orchestrator | + unset VIRTUAL_ENV 2026-04-06 01:52:54.937035 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-06 01:52:54.937041 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-06 01:52:54.937048 | orchestrator | + unset -f deactivate 2026-04-06 01:52:54.937054 | orchestrator | + popd 2026-04-06 01:52:54.938251 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-06 01:52:54.938285 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-04-06 01:52:54.939548 | orchestrator | ++ semver 9.5.0 7.0.0 2026-04-06 01:52:54.996871 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-06 01:52:54.996950 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-04-06 01:52:54.997644 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-06 01:52:55.064416 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-06 01:52:55.064518 | orchestrator | ++ semver 2024.2 2025.1 2026-04-06 01:52:55.128963 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-06 01:52:55.129070 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-04-06 01:52:55.227294 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-06 01:52:55.227467 | orchestrator | + source /opt/venv/bin/activate 2026-04-06 01:52:55.227556 | orchestrator | ++ deactivate nondestructive 2026-04-06 01:52:55.227572 | orchestrator | ++ '[' -n '' ']' 2026-04-06 01:52:55.227580 | orchestrator | ++ '[' -n '' ']' 2026-04-06 01:52:55.227592 | orchestrator | ++ hash -r 2026-04-06 01:52:55.227789 | orchestrator | ++ '[' -n '' ']' 2026-04-06 01:52:55.227863 | orchestrator | ++ unset VIRTUAL_ENV 2026-04-06 01:52:55.227879 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-04-06 01:52:55.227895 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-04-06 01:52:55.228120 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-04-06 01:52:55.228142 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-04-06 01:52:55.228295 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-04-06 01:52:55.228331 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-04-06 01:52:55.228400 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-06 01:52:55.228722 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-06 01:52:55.228895 | orchestrator | ++ export PATH 2026-04-06 01:52:55.228913 | orchestrator | ++ '[' -n '' ']' 2026-04-06 01:52:55.228992 | orchestrator | ++ '[' -z '' ']' 2026-04-06 01:52:55.229006 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-04-06 01:52:55.229017 | orchestrator | ++ PS1='(venv) ' 2026-04-06 01:52:55.229029 | orchestrator | ++ export PS1 2026-04-06 01:52:55.229041 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-04-06 01:52:55.229056 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-04-06 01:52:55.229067 | orchestrator | ++ hash -r 2026-04-06 01:52:55.229415 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-04-06 01:52:56.592507 | orchestrator | 2026-04-06 01:52:56.592593 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-04-06 01:52:56.592604 | orchestrator | 2026-04-06 01:52:56.592612 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-06 01:52:57.283901 | orchestrator | ok: [testbed-manager] 2026-04-06 01:52:57.283986 | orchestrator | 2026-04-06 01:52:57.283994 | orchestrator | TASK [Copy fact files] ********************************************************* 
2026-04-06 01:52:58.372011 | orchestrator | changed: [testbed-manager] 2026-04-06 01:52:58.593306 | orchestrator | 2026-04-06 01:52:58.593375 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-04-06 01:52:58.593410 | orchestrator | 2026-04-06 01:52:58.593420 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-06 01:53:00.958287 | orchestrator | ok: [testbed-manager] 2026-04-06 01:53:00.958395 | orchestrator | 2026-04-06 01:53:00.958403 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-04-06 01:53:01.012053 | orchestrator | ok: [testbed-manager] 2026-04-06 01:53:01.012129 | orchestrator | 2026-04-06 01:53:01.012140 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-04-06 01:53:01.544721 | orchestrator | changed: [testbed-manager] 2026-04-06 01:53:01.544817 | orchestrator | 2026-04-06 01:53:01.544836 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-04-06 01:53:01.582108 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:53:01.582244 | orchestrator | 2026-04-06 01:53:01.582263 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-06 01:53:01.962804 | orchestrator | changed: [testbed-manager] 2026-04-06 01:53:01.962907 | orchestrator | 2026-04-06 01:53:01.962924 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-04-06 01:53:02.307483 | orchestrator | ok: [testbed-manager] 2026-04-06 01:53:02.307585 | orchestrator | 2026-04-06 01:53:02.307603 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-04-06 01:53:02.456798 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:53:02.456911 | orchestrator | 2026-04-06 01:53:02.456929 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-04-06 01:53:02.456941 | orchestrator | 2026-04-06 01:53:02.456952 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-06 01:53:04.370624 | orchestrator | ok: [testbed-manager] 2026-04-06 01:53:04.370718 | orchestrator | 2026-04-06 01:53:04.370734 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-04-06 01:53:04.513744 | orchestrator | included: osism.services.traefik for testbed-manager 2026-04-06 01:53:04.513819 | orchestrator | 2026-04-06 01:53:04.513827 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-04-06 01:53:04.577972 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-04-06 01:53:04.578117 | orchestrator | 2026-04-06 01:53:04.578156 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-04-06 01:53:05.813722 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-04-06 01:53:05.813836 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-04-06 01:53:05.813858 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-04-06 01:53:05.813875 | orchestrator | 2026-04-06 01:53:05.813895 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-04-06 01:53:07.841447 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-04-06 01:53:07.841545 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-04-06 01:53:07.841555 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-04-06 01:53:07.841562 | orchestrator | 2026-04-06 01:53:07.841570 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-04-06 01:53:08.555504 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-06 01:53:08.555606 | orchestrator | changed: [testbed-manager] 2026-04-06 01:53:08.555623 | orchestrator | 2026-04-06 01:53:08.555636 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-04-06 01:53:09.266131 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-06 01:53:09.266267 | orchestrator | changed: [testbed-manager] 2026-04-06 01:53:09.266276 | orchestrator | 2026-04-06 01:53:09.266281 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-04-06 01:53:09.313490 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:53:09.313603 | orchestrator | 2026-04-06 01:53:09.313620 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-04-06 01:53:09.724955 | orchestrator | ok: [testbed-manager] 2026-04-06 01:53:09.725069 | orchestrator | 2026-04-06 01:53:09.725089 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-04-06 01:53:09.834716 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-04-06 01:53:09.834817 | orchestrator | 2026-04-06 01:53:09.834834 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-04-06 01:53:11.128298 | orchestrator | changed: [testbed-manager] 2026-04-06 01:53:11.128402 | orchestrator | 2026-04-06 01:53:11.128417 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-04-06 01:53:12.121997 | orchestrator | changed: [testbed-manager] 2026-04-06 01:53:12.122137 | orchestrator | 2026-04-06 01:53:12.122150 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-04-06 01:53:22.626176 | 
orchestrator | changed: [testbed-manager] 2026-04-06 01:53:22.626354 | orchestrator | 2026-04-06 01:53:22.626366 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-04-06 01:53:22.681771 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:53:22.681868 | orchestrator | 2026-04-06 01:53:22.681906 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-04-06 01:53:22.681919 | orchestrator | 2026-04-06 01:53:22.681930 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-06 01:53:24.735798 | orchestrator | ok: [testbed-manager] 2026-04-06 01:53:24.735905 | orchestrator | 2026-04-06 01:53:24.735921 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-04-06 01:53:24.868065 | orchestrator | included: osism.services.manager for testbed-manager 2026-04-06 01:53:24.868167 | orchestrator | 2026-04-06 01:53:24.868261 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-04-06 01:53:24.929263 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-04-06 01:53:24.929391 | orchestrator | 2026-04-06 01:53:24.929422 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-04-06 01:53:27.720316 | orchestrator | ok: [testbed-manager] 2026-04-06 01:53:27.720482 | orchestrator | 2026-04-06 01:53:27.721308 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-04-06 01:53:27.779299 | orchestrator | ok: [testbed-manager] 2026-04-06 01:53:27.779423 | orchestrator | 2026-04-06 01:53:27.779435 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-04-06 01:53:27.921302 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-04-06 01:53:27.921452 | orchestrator | 2026-04-06 01:53:27.921479 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-04-06 01:53:30.997694 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-04-06 01:53:30.997876 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-04-06 01:53:30.997894 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-04-06 01:53:30.997907 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-04-06 01:53:30.997919 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-04-06 01:53:30.997931 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-04-06 01:53:30.997942 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-04-06 01:53:30.997953 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-04-06 01:53:30.997965 | orchestrator | 2026-04-06 01:53:30.997978 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-04-06 01:53:31.700577 | orchestrator | changed: [testbed-manager] 2026-04-06 01:53:31.700690 | orchestrator | 2026-04-06 01:53:31.700706 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-04-06 01:53:32.391681 | orchestrator | changed: [testbed-manager] 2026-04-06 01:53:32.391791 | orchestrator | 2026-04-06 01:53:32.391802 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-04-06 01:53:32.473235 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-04-06 01:53:32.473338 | orchestrator | 2026-04-06 01:53:32.473347 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-04-06 01:53:33.833069 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-04-06 01:53:33.833228 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-04-06 01:53:33.833247 | orchestrator | 2026-04-06 01:53:33.833261 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-04-06 01:53:34.530820 | orchestrator | changed: [testbed-manager] 2026-04-06 01:53:34.530968 | orchestrator | 2026-04-06 01:53:34.530989 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-04-06 01:53:34.591951 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:53:34.592102 | orchestrator | 2026-04-06 01:53:34.592124 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-04-06 01:53:34.693329 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-04-06 01:53:34.693457 | orchestrator | 2026-04-06 01:53:34.693471 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-04-06 01:53:35.402854 | orchestrator | changed: [testbed-manager] 2026-04-06 01:53:35.402954 | orchestrator | 2026-04-06 01:53:35.402964 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-04-06 01:53:35.478725 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-04-06 01:53:35.478853 | orchestrator | 2026-04-06 01:53:35.478872 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-04-06 01:53:36.973989 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-06 01:53:36.974180 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-04-06 01:53:36.974247 | orchestrator | changed: [testbed-manager] 2026-04-06 01:53:36.974258 | orchestrator | 2026-04-06 01:53:36.974267 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-04-06 01:53:37.635772 | orchestrator | changed: [testbed-manager] 2026-04-06 01:53:37.635913 | orchestrator | 2026-04-06 01:53:37.635933 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-04-06 01:53:37.697144 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:53:37.697279 | orchestrator | 2026-04-06 01:53:37.697291 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-04-06 01:53:37.794122 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-04-06 01:53:37.794322 | orchestrator | 2026-04-06 01:53:37.794342 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-04-06 01:53:38.401944 | orchestrator | changed: [testbed-manager] 2026-04-06 01:53:38.402168 | orchestrator | 2026-04-06 01:53:38.402185 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-04-06 01:53:38.819894 | orchestrator | changed: [testbed-manager] 2026-04-06 01:53:38.820030 | orchestrator | 2026-04-06 01:53:38.820047 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-04-06 01:53:40.123311 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-04-06 01:53:40.123425 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-04-06 01:53:40.123438 | orchestrator | 2026-04-06 01:53:40.123451 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-04-06 01:53:40.842492 | orchestrator | changed: [testbed-manager] 2026-04-06 
01:53:40.842624 | orchestrator | 2026-04-06 01:53:40.842640 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-04-06 01:53:41.303876 | orchestrator | ok: [testbed-manager] 2026-04-06 01:53:41.304008 | orchestrator | 2026-04-06 01:53:41.304023 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-04-06 01:53:41.681558 | orchestrator | changed: [testbed-manager] 2026-04-06 01:53:41.681693 | orchestrator | 2026-04-06 01:53:41.681710 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-04-06 01:53:41.733717 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:53:41.733798 | orchestrator | 2026-04-06 01:53:41.733805 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-04-06 01:53:41.816054 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-04-06 01:53:41.816171 | orchestrator | 2026-04-06 01:53:41.816185 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-04-06 01:53:41.873694 | orchestrator | ok: [testbed-manager] 2026-04-06 01:53:41.873813 | orchestrator | 2026-04-06 01:53:41.873836 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-04-06 01:53:44.088477 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-04-06 01:53:44.088591 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-04-06 01:53:44.088608 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-04-06 01:53:44.088620 | orchestrator | 2026-04-06 01:53:44.088633 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-04-06 01:53:44.876465 | orchestrator | changed: [testbed-manager] 2026-04-06 
01:53:44.876553 | orchestrator | 2026-04-06 01:53:44.876564 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-04-06 01:53:45.633510 | orchestrator | changed: [testbed-manager] 2026-04-06 01:53:45.633595 | orchestrator | 2026-04-06 01:53:45.633609 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-04-06 01:53:46.376574 | orchestrator | changed: [testbed-manager] 2026-04-06 01:53:46.376686 | orchestrator | 2026-04-06 01:53:46.376704 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-04-06 01:53:46.456163 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-04-06 01:53:46.456296 | orchestrator | 2026-04-06 01:53:46.456314 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-04-06 01:53:46.514996 | orchestrator | ok: [testbed-manager] 2026-04-06 01:53:46.515101 | orchestrator | 2026-04-06 01:53:46.515119 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-04-06 01:53:47.322304 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-04-06 01:53:47.322400 | orchestrator | 2026-04-06 01:53:47.322413 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-04-06 01:53:47.427034 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-04-06 01:53:47.427137 | orchestrator | 2026-04-06 01:53:47.427231 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-04-06 01:53:48.222668 | orchestrator | changed: [testbed-manager] 2026-04-06 01:53:48.222794 | orchestrator | 2026-04-06 01:53:48.222825 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-04-06 01:53:48.916951 | orchestrator | ok: [testbed-manager] 2026-04-06 01:53:48.917039 | orchestrator | 2026-04-06 01:53:48.917052 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-04-06 01:53:48.969903 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:53:48.970002 | orchestrator | 2026-04-06 01:53:48.970071 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-04-06 01:53:49.039030 | orchestrator | ok: [testbed-manager] 2026-04-06 01:53:49.039129 | orchestrator | 2026-04-06 01:53:49.039147 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-04-06 01:53:49.918641 | orchestrator | changed: [testbed-manager] 2026-04-06 01:53:49.918720 | orchestrator | 2026-04-06 01:53:49.918729 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-04-06 01:55:10.814548 | orchestrator | changed: [testbed-manager] 2026-04-06 01:55:10.814635 | orchestrator | 2026-04-06 01:55:10.814645 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-04-06 01:55:11.979181 | orchestrator | ok: [testbed-manager] 2026-04-06 01:55:11.979291 | orchestrator | 2026-04-06 01:55:11.979310 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-04-06 01:55:12.033279 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:55:12.033374 | orchestrator | 2026-04-06 01:55:12.033387 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-04-06 01:55:14.893221 | orchestrator | changed: [testbed-manager] 2026-04-06 01:55:14.893366 | orchestrator | 2026-04-06 01:55:14.893392 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
2026-04-06 01:55:15.007285 | orchestrator | ok: [testbed-manager] 2026-04-06 01:55:15.007377 | orchestrator | 2026-04-06 01:55:15.007391 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-06 01:55:15.007401 | orchestrator | 2026-04-06 01:55:15.007410 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-04-06 01:55:15.072076 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:55:15.072203 | orchestrator | 2026-04-06 01:55:15.072220 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-04-06 01:56:15.125773 | orchestrator | Pausing for 60 seconds 2026-04-06 01:56:15.125890 | orchestrator | changed: [testbed-manager] 2026-04-06 01:56:15.125911 | orchestrator | 2026-04-06 01:56:15.125929 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-04-06 01:56:18.776009 | orchestrator | changed: [testbed-manager] 2026-04-06 01:56:18.776140 | orchestrator | 2026-04-06 01:56:18.776158 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-04-06 01:57:21.190235 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-04-06 01:57:21.190395 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-04-06 01:57:21.190433 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
2026-04-06 01:57:21.190447 | orchestrator | changed: [testbed-manager] 2026-04-06 01:57:21.190460 | orchestrator | 2026-04-06 01:57:21.190473 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-04-06 01:57:34.102260 | orchestrator | changed: [testbed-manager] 2026-04-06 01:57:34.102432 | orchestrator | 2026-04-06 01:57:34.102449 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-04-06 01:57:34.204101 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-04-06 01:57:34.204187 | orchestrator | 2026-04-06 01:57:34.204196 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-06 01:57:34.204203 | orchestrator | 2026-04-06 01:57:34.204210 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-04-06 01:57:34.258713 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:57:34.258786 | orchestrator | 2026-04-06 01:57:34.258798 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-04-06 01:57:34.349936 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-04-06 01:57:34.350068 | orchestrator | 2026-04-06 01:57:34.350081 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-04-06 01:57:35.205929 | orchestrator | changed: [testbed-manager] 2026-04-06 01:57:35.206089 | orchestrator | 2026-04-06 01:57:35.206108 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-04-06 01:57:38.842758 | orchestrator | ok: [testbed-manager] 2026-04-06 01:57:38.842836 | orchestrator | 2026-04-06 01:57:38.842845 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-04-06 01:57:38.921161 | orchestrator | ok: [testbed-manager] => { 2026-04-06 01:57:38.921256 | orchestrator | "version_check_result.stdout_lines": [ 2026-04-06 01:57:38.921271 | orchestrator | "=== OSISM Container Version Check ===", 2026-04-06 01:57:38.921283 | orchestrator | "Checking running containers against expected versions...", 2026-04-06 01:57:38.921295 | orchestrator | "", 2026-04-06 01:57:38.921368 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-04-06 01:57:38.921380 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-04-06 01:57:38.921392 | orchestrator | " Enabled: true", 2026-04-06 01:57:38.921404 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-04-06 01:57:38.921427 | orchestrator | " Status: ✅ MATCH", 2026-04-06 01:57:38.921438 | orchestrator | "", 2026-04-06 01:57:38.921450 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-04-06 01:57:38.921489 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-04-06 01:57:38.921501 | orchestrator | " Enabled: true", 2026-04-06 01:57:38.921512 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-04-06 01:57:38.921523 | orchestrator | " Status: ✅ MATCH", 2026-04-06 01:57:38.921534 | orchestrator | "", 2026-04-06 01:57:38.921545 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-04-06 01:57:38.921556 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-04-06 01:57:38.921567 | orchestrator | " Enabled: true", 2026-04-06 01:57:38.921578 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-04-06 01:57:38.921588 | orchestrator | " Status: ✅ MATCH", 2026-04-06 01:57:38.921599 | orchestrator | 
"", 2026-04-06 01:57:38.921610 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-04-06 01:57:38.921621 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-04-06 01:57:38.921632 | orchestrator | " Enabled: true", 2026-04-06 01:57:38.921643 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-04-06 01:57:38.921654 | orchestrator | " Status: ✅ MATCH", 2026-04-06 01:57:38.921665 | orchestrator | "", 2026-04-06 01:57:38.921678 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-04-06 01:57:38.921689 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-04-06 01:57:38.921702 | orchestrator | " Enabled: true", 2026-04-06 01:57:38.921715 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-04-06 01:57:38.921727 | orchestrator | " Status: ✅ MATCH", 2026-04-06 01:57:38.921739 | orchestrator | "", 2026-04-06 01:57:38.921751 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-04-06 01:57:38.921765 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-06 01:57:38.921777 | orchestrator | " Enabled: true", 2026-04-06 01:57:38.921790 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-06 01:57:38.921802 | orchestrator | " Status: ✅ MATCH", 2026-04-06 01:57:38.921815 | orchestrator | "", 2026-04-06 01:57:38.921828 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-04-06 01:57:38.921840 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-06 01:57:38.921853 | orchestrator | " Enabled: true", 2026-04-06 01:57:38.921866 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-06 01:57:38.921878 | orchestrator | " Status: ✅ MATCH", 2026-04-06 01:57:38.921890 | orchestrator | "", 2026-04-06 01:57:38.921902 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-04-06 01:57:38.921915 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-06 01:57:38.921927 | orchestrator | " Enabled: true", 2026-04-06 01:57:38.921940 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-06 01:57:38.921952 | orchestrator | " Status: ✅ MATCH", 2026-04-06 01:57:38.921965 | orchestrator | "", 2026-04-06 01:57:38.921977 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-04-06 01:57:38.921990 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-04-06 01:57:38.922002 | orchestrator | " Enabled: true", 2026-04-06 01:57:38.922014 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-04-06 01:57:38.922085 | orchestrator | " Status: ✅ MATCH", 2026-04-06 01:57:38.922098 | orchestrator | "", 2026-04-06 01:57:38.922111 | orchestrator | "Checking service: redis (Redis Cache)", 2026-04-06 01:57:38.922124 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-06 01:57:38.922135 | orchestrator | " Enabled: true", 2026-04-06 01:57:38.922146 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-06 01:57:38.922157 | orchestrator | " Status: ✅ MATCH", 2026-04-06 01:57:38.922167 | orchestrator | "", 2026-04-06 01:57:38.922178 | orchestrator | "Checking service: api (OSISM API Service)", 2026-04-06 01:57:38.922198 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-06 01:57:38.922209 | orchestrator | " Enabled: true", 2026-04-06 01:57:38.922220 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-06 01:57:38.922230 | orchestrator | " Status: ✅ MATCH", 2026-04-06 01:57:38.922241 | orchestrator | "", 2026-04-06 01:57:38.922252 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-04-06 01:57:38.922262 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-06 01:57:38.922273 | orchestrator | " Enabled: true", 2026-04-06 01:57:38.922284 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-06 01:57:38.922294 | orchestrator | " Status: ✅ MATCH", 2026-04-06 01:57:38.922362 | orchestrator | "", 2026-04-06 01:57:38.922378 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-04-06 01:57:38.922396 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-06 01:57:38.922416 | orchestrator | " Enabled: true", 2026-04-06 01:57:38.922435 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-06 01:57:38.922453 | orchestrator | " Status: ✅ MATCH", 2026-04-06 01:57:38.922468 | orchestrator | "", 2026-04-06 01:57:38.922479 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-04-06 01:57:38.922490 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-06 01:57:38.922500 | orchestrator | " Enabled: true", 2026-04-06 01:57:38.922511 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-06 01:57:38.922541 | orchestrator | " Status: ✅ MATCH", 2026-04-06 01:57:38.922553 | orchestrator | "", 2026-04-06 01:57:38.922564 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-04-06 01:57:38.922574 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-06 01:57:38.922595 | orchestrator | " Enabled: true", 2026-04-06 01:57:38.922607 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-04-06 01:57:38.922617 | orchestrator | " Status: ✅ MATCH", 2026-04-06 01:57:38.922628 | orchestrator | "", 2026-04-06 01:57:38.922639 | orchestrator | "=== Summary ===", 2026-04-06 01:57:38.922650 | orchestrator | "Errors (version mismatches): 0", 2026-04-06 01:57:38.922661 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-04-06 01:57:38.922671 | orchestrator | "", 2026-04-06 01:57:38.922682 | orchestrator | "✅ All running containers match expected versions!" 2026-04-06 01:57:38.922693 | orchestrator | ] 2026-04-06 01:57:38.922704 | orchestrator | } 2026-04-06 01:57:38.922715 | orchestrator | 2026-04-06 01:57:38.922727 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-06 01:57:38.986669 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:57:38.986760 | orchestrator | 2026-04-06 01:57:38.986775 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 01:57:38.986787 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-04-06 01:57:38.986797 | orchestrator | 2026-04-06 01:57:39.165519 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-06 01:57:39.165639 | orchestrator | + deactivate 2026-04-06 01:57:39.165665 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-06 01:57:39.165686 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-06 01:57:39.165705 | orchestrator | + export PATH 2026-04-06 01:57:39.165723 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-06 01:57:39.165741 | orchestrator | + '[' -n '' ']' 2026-04-06 01:57:39.165760 | orchestrator | + hash -r 2026-04-06 01:57:39.165779 | orchestrator | + '[' -n '' ']' 2026-04-06 01:57:39.165796 | orchestrator | + unset VIRTUAL_ENV 2026-04-06 01:57:39.165813 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-06 01:57:39.165831 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-06 01:57:39.165849 | orchestrator | + unset -f deactivate 2026-04-06 01:57:39.165868 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-04-06 01:57:39.176691 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-06 01:57:39.176803 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-06 01:57:39.176859 | orchestrator | + local max_attempts=60 2026-04-06 01:57:39.176880 | orchestrator | + local name=ceph-ansible 2026-04-06 01:57:39.176898 | orchestrator | + local attempt_num=1 2026-04-06 01:57:39.177258 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-06 01:57:39.215458 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-06 01:57:39.215554 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-06 01:57:39.215571 | orchestrator | + local max_attempts=60 2026-04-06 01:57:39.215589 | orchestrator | + local name=kolla-ansible 2026-04-06 01:57:39.215605 | orchestrator | + local attempt_num=1 2026-04-06 01:57:39.216031 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-06 01:57:39.262404 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-06 01:57:39.262499 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-06 01:57:39.262513 | orchestrator | + local max_attempts=60 2026-04-06 01:57:39.262526 | orchestrator | + local name=osism-ansible 2026-04-06 01:57:39.262538 | orchestrator | + local attempt_num=1 2026-04-06 01:57:39.263113 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-06 01:57:39.312293 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-06 01:57:39.312430 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-06 01:57:39.312442 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-06 01:57:40.146871 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-04-06 01:57:40.353146 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-06 01:57:40.353253 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-04-06 01:57:40.353268 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-04-06 01:57:40.353279 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-04-06 01:57:40.353291 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-04-06 01:57:40.353348 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-04-06 01:57:40.353361 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-04-06 01:57:40.353371 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-04-06 01:57:40.353381 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-04-06 01:57:40.353390 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-04-06 01:57:40.353400 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-04-06 01:57:40.353410 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-04-06 01:57:40.353420 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-04-06 01:57:40.353452 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-04-06 01:57:40.353463 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-04-06 01:57:40.353473 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-04-06 01:57:40.360357 | orchestrator | ++ semver 9.5.0 7.0.0 2026-04-06 01:57:40.404034 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-06 01:57:40.404171 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-04-06 01:57:40.409466 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-04-06 01:57:52.933767 | orchestrator | 2026-04-06 01:57:52 | INFO  | Task 9aa3a3e8-5630-43a9-a715-3ae427edba91 (resolvconf) was prepared for execution. 2026-04-06 01:57:52.933889 | orchestrator | 2026-04-06 01:57:52 | INFO  | It takes a moment until task 9aa3a3e8-5630-43a9-a715-3ae427edba91 (resolvconf) has been started and output is visible here. 
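The `set -x` trace earlier in this segment shows the deploy script calling `wait_for_container_healthy 60 <name>` for ceph-ansible, kolla-ansible, and osism-ansible, each time comparing the output of `docker inspect -f '{{.State.Health.Status}}'` against `healthy`. A minimal sketch of such a wait loop, reconstructed only from what is visible in the trace (the function name, its `max_attempts`/`name`/`attempt_num` locals, and the `docker inspect` probe appear in the log; the retry delay and the failure message are assumptions):

```shell
#!/bin/sh
# Poll `docker inspect` until the named container reports Health.Status
# "healthy", giving up after max_attempts tries. Reconstructed sketch;
# the 5s sleep and the error text are assumptions, not taken from the log.
wait_for_container_healthy() {
    max_attempts="$1"
    name="$2"
    attempt_num=1
    while [ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" != "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
    return 0
}
```

In this run all three containers were already `(healthy)` on the first probe, so each call falls straight through the loop, which is why the trace shows only a single `docker inspect` per container.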
2026-04-06 01:58:09.127390 | orchestrator | 2026-04-06 01:58:09.127512 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-04-06 01:58:09.127529 | orchestrator | 2026-04-06 01:58:09.127541 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-06 01:58:09.127553 | orchestrator | Monday 06 April 2026 01:57:57 +0000 (0:00:00.161) 0:00:00.161 ********** 2026-04-06 01:58:09.127565 | orchestrator | ok: [testbed-manager] 2026-04-06 01:58:09.127576 | orchestrator | 2026-04-06 01:58:09.127588 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-06 01:58:09.127600 | orchestrator | Monday 06 April 2026 01:58:01 +0000 (0:00:04.170) 0:00:04.331 ********** 2026-04-06 01:58:09.127611 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:58:09.127623 | orchestrator | 2026-04-06 01:58:09.127634 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-06 01:58:09.127645 | orchestrator | Monday 06 April 2026 01:58:01 +0000 (0:00:00.082) 0:00:04.414 ********** 2026-04-06 01:58:09.127657 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-04-06 01:58:09.127669 | orchestrator | 2026-04-06 01:58:09.127680 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-06 01:58:09.127691 | orchestrator | Monday 06 April 2026 01:58:02 +0000 (0:00:00.080) 0:00:04.495 ********** 2026-04-06 01:58:09.127724 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-04-06 01:58:09.127736 | orchestrator | 2026-04-06 01:58:09.127747 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-04-06 01:58:09.127758 | orchestrator | Monday 06 April 2026 01:58:02 +0000 (0:00:00.106) 0:00:04.601 ********** 2026-04-06 01:58:09.127769 | orchestrator | ok: [testbed-manager] 2026-04-06 01:58:09.127780 | orchestrator | 2026-04-06 01:58:09.127791 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-06 01:58:09.127802 | orchestrator | Monday 06 April 2026 01:58:03 +0000 (0:00:01.436) 0:00:06.037 ********** 2026-04-06 01:58:09.127813 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:58:09.127824 | orchestrator | 2026-04-06 01:58:09.127835 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-06 01:58:09.127846 | orchestrator | Monday 06 April 2026 01:58:03 +0000 (0:00:00.071) 0:00:06.109 ********** 2026-04-06 01:58:09.127888 | orchestrator | ok: [testbed-manager] 2026-04-06 01:58:09.127902 | orchestrator | 2026-04-06 01:58:09.127914 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-06 01:58:09.127928 | orchestrator | Monday 06 April 2026 01:58:04 +0000 (0:00:00.606) 0:00:06.716 ********** 2026-04-06 01:58:09.127940 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:58:09.127953 | orchestrator | 2026-04-06 01:58:09.127966 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-06 01:58:09.127980 | orchestrator | Monday 06 April 2026 01:58:04 +0000 (0:00:00.085) 0:00:06.802 ********** 2026-04-06 01:58:09.127993 | orchestrator | changed: [testbed-manager] 2026-04-06 01:58:09.128006 | orchestrator | 2026-04-06 01:58:09.128032 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-06 01:58:09.128046 | orchestrator | Monday 06 April 2026 01:58:04 +0000 (0:00:00.629) 0:00:07.432 ********** 2026-04-06 01:58:09.128059 | orchestrator | changed: 
[testbed-manager] 2026-04-06 01:58:09.128077 | orchestrator | 2026-04-06 01:58:09.128096 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-06 01:58:09.128114 | orchestrator | Monday 06 April 2026 01:58:06 +0000 (0:00:01.241) 0:00:08.673 ********** 2026-04-06 01:58:09.128132 | orchestrator | ok: [testbed-manager] 2026-04-06 01:58:09.128149 | orchestrator | 2026-04-06 01:58:09.128168 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-06 01:58:09.128187 | orchestrator | Monday 06 April 2026 01:58:07 +0000 (0:00:01.068) 0:00:09.741 ********** 2026-04-06 01:58:09.128206 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-04-06 01:58:09.128224 | orchestrator | 2026-04-06 01:58:09.128242 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-06 01:58:09.128259 | orchestrator | Monday 06 April 2026 01:58:07 +0000 (0:00:00.087) 0:00:09.829 ********** 2026-04-06 01:58:09.128278 | orchestrator | changed: [testbed-manager] 2026-04-06 01:58:09.128294 | orchestrator | 2026-04-06 01:58:09.128334 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 01:58:09.128356 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-06 01:58:09.128374 | orchestrator | 2026-04-06 01:58:09.128393 | orchestrator | 2026-04-06 01:58:09.128411 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 01:58:09.128429 | orchestrator | Monday 06 April 2026 01:58:08 +0000 (0:00:01.398) 0:00:11.227 ********** 2026-04-06 01:58:09.128448 | orchestrator | =============================================================================== 2026-04-06 01:58:09.128466 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.17s 2026-04-06 01:58:09.128484 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.44s 2026-04-06 01:58:09.128504 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.40s 2026-04-06 01:58:09.128524 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.24s 2026-04-06 01:58:09.128543 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.07s 2026-04-06 01:58:09.128562 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.63s 2026-04-06 01:58:09.128611 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.61s 2026-04-06 01:58:09.128636 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.11s 2026-04-06 01:58:09.128665 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2026-04-06 01:58:09.128680 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2026-04-06 01:58:09.128697 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.08s 2026-04-06 01:58:09.128713 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-04-06 01:58:09.128747 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-04-06 01:58:09.649447 | orchestrator | + osism apply sshconfig 2026-04-06 01:58:22.298104 | orchestrator | 2026-04-06 01:58:22 | INFO  | Task d6a118cf-9eef-4bda-9d6b-68ea06e3a17e (sshconfig) was prepared for execution. 
2026-04-06 01:58:22.298214 | orchestrator | 2026-04-06 01:58:22 | INFO  | It takes a moment until task d6a118cf-9eef-4bda-9d6b-68ea06e3a17e (sshconfig) has been started and output is visible here. 2026-04-06 01:58:35.801106 | orchestrator | 2026-04-06 01:58:35.801221 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-04-06 01:58:35.801238 | orchestrator | 2026-04-06 01:58:35.801272 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-04-06 01:58:35.801284 | orchestrator | Monday 06 April 2026 01:58:27 +0000 (0:00:00.188) 0:00:00.188 ********** 2026-04-06 01:58:35.801296 | orchestrator | ok: [testbed-manager] 2026-04-06 01:58:35.801308 | orchestrator | 2026-04-06 01:58:35.801397 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-04-06 01:58:35.801423 | orchestrator | Monday 06 April 2026 01:58:27 +0000 (0:00:00.623) 0:00:00.811 ********** 2026-04-06 01:58:35.801441 | orchestrator | changed: [testbed-manager] 2026-04-06 01:58:35.801462 | orchestrator | 2026-04-06 01:58:35.801481 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-04-06 01:58:35.801500 | orchestrator | Monday 06 April 2026 01:58:28 +0000 (0:00:00.633) 0:00:01.444 ********** 2026-04-06 01:58:35.801518 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-04-06 01:58:35.801538 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-04-06 01:58:35.801558 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-04-06 01:58:35.801578 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-04-06 01:58:35.801598 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-04-06 01:58:35.801615 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-04-06 01:58:35.801627 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-04-06 01:58:35.801638 | orchestrator | 2026-04-06 01:58:35.801650 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-04-06 01:58:35.801663 | orchestrator | Monday 06 April 2026 01:58:34 +0000 (0:00:06.385) 0:00:07.830 ********** 2026-04-06 01:58:35.801675 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:58:35.801688 | orchestrator | 2026-04-06 01:58:35.801700 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-04-06 01:58:35.801713 | orchestrator | Monday 06 April 2026 01:58:34 +0000 (0:00:00.090) 0:00:07.920 ********** 2026-04-06 01:58:35.801726 | orchestrator | changed: [testbed-manager] 2026-04-06 01:58:35.801739 | orchestrator | 2026-04-06 01:58:35.801752 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 01:58:35.801765 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 01:58:35.801778 | orchestrator | 2026-04-06 01:58:35.801790 | orchestrator | 2026-04-06 01:58:35.801803 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 01:58:35.801816 | orchestrator | Monday 06 April 2026 01:58:35 +0000 (0:00:00.667) 0:00:08.587 ********** 2026-04-06 01:58:35.801829 | orchestrator | =============================================================================== 2026-04-06 01:58:35.801841 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.39s 2026-04-06 01:58:35.801854 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.67s 2026-04-06 01:58:35.801867 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.63s 2026-04-06 01:58:35.801879 | orchestrator | osism.commons.sshconfig : Get home directory of operator user 
----------- 0.62s 2026-04-06 01:58:35.801918 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s 2026-04-06 01:58:36.199915 | orchestrator | + osism apply known-hosts 2026-04-06 01:58:48.481978 | orchestrator | 2026-04-06 01:58:48 | INFO  | Task 9e10cc41-1993-493b-ae72-e15267ef6006 (known-hosts) was prepared for execution. 2026-04-06 01:58:48.482192 | orchestrator | 2026-04-06 01:58:48 | INFO  | It takes a moment until task 9e10cc41-1993-493b-ae72-e15267ef6006 (known-hosts) has been started and output is visible here. 2026-04-06 01:59:06.934815 | orchestrator | 2026-04-06 01:59:06.934904 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-04-06 01:59:06.934914 | orchestrator | 2026-04-06 01:59:06.934922 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-04-06 01:59:06.934930 | orchestrator | Monday 06 April 2026 01:58:53 +0000 (0:00:00.186) 0:00:00.186 ********** 2026-04-06 01:59:06.934938 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-06 01:59:06.934945 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-06 01:59:06.934953 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-06 01:59:06.934960 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-06 01:59:06.934966 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-06 01:59:06.934973 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-06 01:59:06.934980 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-06 01:59:06.934986 | orchestrator | 2026-04-06 01:59:06.934993 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-04-06 01:59:06.935001 | orchestrator | Monday 06 April 2026 01:58:59 +0000 (0:00:06.411) 0:00:06.598 ********** 2026-04-06 01:59:06.935009 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-06 01:59:06.935017 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-06 01:59:06.935024 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-06 01:59:06.935031 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-06 01:59:06.935038 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-06 01:59:06.935052 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-06 01:59:06.935059 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-06 01:59:06.935066 | orchestrator | 2026-04-06 01:59:06.935073 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-06 01:59:06.935080 | orchestrator | Monday 06 April 2026 01:58:59 +0000 (0:00:00.175) 0:00:06.773 ********** 2026-04-06 01:59:06.935086 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAILRqGKZl9k26h5LpISZqCu09+51TGpi/xLh/pIKpOufC) 2026-04-06 01:59:06.935100 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfURP57cSr/6MuS/t2GvqifxtnUJ8SIUBN547eqQnxe0TEMpbjNulybsuHwPeiBVhUD0Zbxi5/RiWSYWfSNEW5siDc5SGO6wwOobP7hzboWVOKxGcziKEjhKGz/c9y/9mh8gR0NOsD/8evdHk0+qOxSj33FMvfhPHnnUM83z8fs0sRQ51YS4MiOvHSLvWLA5VJQqRFoDZLN5/3vEu6bg+WpXipaynCPIryT80NFQLYEOVyWTTOzn95GPU7e8eXTQJ4hZbQqes/WywDgYwEMooornxuq5Ii6mp9Z3z/JIyOfpjhKQsB+tqi775MGQwnx3pqXojBGlxfHcrOgijGbaX+bXtehfszcKhlWMZX0GRzEIewYBfrBhXbkWQNQknWGFnfavkNSpGK4PPHWp8uooA4hTg5PWnR8VShlCRXHC9nVCXvgk1OyLPSw+78qc/xKE/r51SzlfpfXIHi50K8GAzCt6TUzEGWfGcbJ8N6N7Jj2DGTbN0TE7TGixLk443TV0E=) 2026-04-06 01:59:06.935144 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI8Y0HsepKUyZCOZdtXjQwNc1YRZoe34BbZEqiVmSrOGgfRh4Eeay5G9+bIEOzbr095pPImQeju7T3C30aa0ww4=) 2026-04-06 01:59:06.935154 | orchestrator | 2026-04-06 01:59:06.935161 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-06 01:59:06.935176 | orchestrator | Monday 06 April 2026 01:59:00 +0000 (0:00:01.300) 0:00:08.074 ********** 2026-04-06 01:59:06.935195 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+SBDEpq9WAm6SpJCjmlnbPtpsP+/xy8TNccYijGbwpac4scnxncgkMgrYqJF64MtyU0BPgqVwOzFBXd46wfMXHHtxLLCDScQT8159s8u3Utzff+YwN6LK6uQqnJNM6Y76xBlRmJ4EPEipDTS1twoz/j1IQb+xobuN3ROmhrkeZAmzA4vd5Q9C0a5x3x3DHp5qeDlZkJHdkUFyOkskkP6KgFDjm5L4Urgq/ZOjweBhiPsgk8pJA4eHIZyxbgm/yUMDKNKKADvMAwK4f5iRS5C0SeK/Y4/NpzXkmaJ9sudq0tRHxuUMu3TG/Dmo4cAHAQNL2fMcKM9UsZ2JCkL+nR0awcs8hD4KdPug7em0rPF22nYIwcP23Gekpds7pSt0z/HRRDAh7ddRJlFQG13ObmRrEv6EX4c8cAcuDV1ZZEiHrEBxlvD5LcRPvAxVjB5O/CSqt1EDL/3Kjt7w6gq3G+ipGwRu3jqVC8hlsN8/nsbKjHJqdmcdxDH89F5S2TQpAEM=) 2026-04-06 01:59:06.935203 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPp1hlC40kUUlmMzoRfUJSUbD5W5IjKb9sV+Mr87S0e9dSqW341Hri994RSqyAeY5HjvkFiZWR7+ktv2H62QX2s=) 2026-04-06 01:59:06.935210 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHJ0HifhB5dXyG1dop4eko9Yn7yfgNHHEshA6448SXyn) 2026-04-06 01:59:06.935217 | orchestrator | 2026-04-06 01:59:06.935224 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-06 01:59:06.935231 | orchestrator | Monday 06 April 2026 01:59:02 +0000 (0:00:01.196) 0:00:09.271 ********** 2026-04-06 01:59:06.935238 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC24iXJkf7UUa571QVOTqLWd3NSprgzjAa+KGWe+sC5SScPU8/kwYpzLNLpqbJlYVb0D0aEC+UcR+6Z8bFTbDSyvcC8s5aNTIIaR8IhqPT8B8STxQ3dQH7pRkaU8hTj4/AwTUe8QNitdkrmt9LO0uZdC91oAUSgtZscX7+PcFrAsE8LcDPSa6Tfgsx8VjmmxsMk5NnFM0YBgA7NE1WPjbuQ7QDt7deKF9FN+Jf8gWyg4+Smb7jo4jBCkB3fAPUkUbSE/lbUoBuooZ1bbaUWwfzDgQ4tQylnBJUyAR6cd6M4eAd4E2t38ZXl5lSNauX1n8BwbpHGXcO2a59kaXFs/5VWDKV+2+P7YP1a1hkyy+1VDfhPMnM4oZQfjCvfFgS5C8IuzeGDuNYJVVQAPYXnGWHORzx9AkLDm9SzYkIcf0BdE3WFZY4A9liOyRsEHS89TwP/JtN44LqcZVeZgkjtMbSSy+gTbqoiLHGecP31j4I/d2+63/6nFA80UxTaUkS+Nrs=) 2026-04-06 01:59:06.935245 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI5ba35AWUWM3lW9z079XBsby2jYxVIOwFPDmLYPAOG7SrebOnBQtw+Vii3f6JRgOWXoDQc2+ZfhBSDRu9tRSoA=) 2026-04-06 01:59:06.935252 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL/REYseH6fOlHDWNtRWPdLdNZD7Vf7gQ2i29kGCngSQ) 2026-04-06 01:59:06.935259 | orchestrator | 2026-04-06 01:59:06.935266 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-06 01:59:06.935273 | orchestrator | Monday 06 April 2026 01:59:03 +0000 (0:00:01.238) 
0:00:10.509 ********** 2026-04-06 01:59:06.935279 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCx+HaNcXQ3rEHpwnLF9gjZAOpnz0j4UYvwrgDmLyzzegJI05izHmHFxkMNJckfF7dQU24865fDofjgAqnGqb1gl3tmoL6pa90LA2NSKFOxk6yF6A77fPQl5HV6N6vXUYo5TXGfUqrgiFBvOSnFsJyJokzTAiqnllPwG0x7YoTaAW4g1/46gxj3XdSSSZ8+VP50Z/VhL4RSjhDOux6slqgtWDn5mHygjwbAaYBbIkSEQgLLXvUeoSQXcNC+npbjHFbCWw8htxrxLjjubQx584qOk2Sz51QEUMeO3+jKtFlVW0LaL5Qcx68wGyJ8c8xJcB8cF+pgVpsdjhAQdhO4BX1T5Ex5KVei7e0DAr0TmlejSElJXdQGKg1QafLUJdc60e7ZSeKxwU6x/e0Z7AccUFaA/dXLBAliJuwWSDzL2zit+JZhZmBkZgrcvN9noLQ1DZiSmGM+3BXKCRDH27wy7157wh2u0KH+rkv4sOFd8DGgVxNEbLSiAygpKuaWqwH7j5U=) 2026-04-06 01:59:06.935292 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDYqvsylddwD/Ualudk5qKKI+vHeTcWxkqT+g4xD4QIPyYizDImpVs5vnlbFpYLA/24z2mM5/Wqps1GEJiOgp18=) 2026-04-06 01:59:06.935299 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICm7u3lUzBjSY9f6AYfFu0GSoaW4hkLEeJ34M8pW21Dj) 2026-04-06 01:59:06.935306 | orchestrator | 2026-04-06 01:59:06.935312 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-06 01:59:06.935319 | orchestrator | Monday 06 April 2026 01:59:04 +0000 (0:00:01.141) 0:00:11.650 ********** 2026-04-06 01:59:06.935492 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCojdWh1VZro/eI9DCBbYMfepqo62kBndj/KSEc8iaRb5a5iDaOOKOGQHSwYtHaQ1tvlIXIFqlbip31GmrmrYlsgpXc7J4khsCRW/8yWDdv3iE5OSgkCXwT9NFQir6S9alccisv/+gFgysUKxYI2Q5u5lXOl/WRze3OTcVw2OtdmH3IZW7bIwEtpZKhR2mOt161JeTNeDIPQjDcXKstxWsrT8BanBdY5BHmtJqn9knqRw/lwSpBykcnp0eG/7NFLmDUKW3WzUpVk3jQ90jhdwfgHcmYHTt4LwHqeCzIX0cavCPuRvYtEyaHEVjIzEbS7hnbQXHe63hDfm5u9nB79X+jtkXOzusVl+ZMfEmAjzpmaCUO7siLWB7HyJDvHfo9s/tSeJTNT+Wwfyx/XIaRIWy1iHdHmfh7M4qE7vlJkTZ6O7l689erdtDXzhEHL53wAQGfTncZ8NcrMuE1RGOelCyg0E1e0Qp3zQALyTmzVk1Qy+xh9JpLy3i06z6y5fLi72k=) 2026-04-06 01:59:06.935520 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBInuuKPxuj6Bhx+XUfLBlbk0XXAAf8JdIRZ2poi+l7WixfO/A3rP4FIM3o0kWdHMDiTNS9mUvRDykKDsbSurCrI=) 2026-04-06 01:59:06.935530 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPX2klYVu98XNyz0KNrdMi2jPMYd/T7q1ZO29D2kJsHk) 2026-04-06 01:59:06.935538 | orchestrator | 2026-04-06 01:59:06.935546 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-06 01:59:06.935555 | orchestrator | Monday 06 April 2026 01:59:05 +0000 (0:00:01.180) 0:00:12.831 ********** 2026-04-06 01:59:06.935574 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDvTbedwXbsTCAd0abxbUFv9YS4qdO+1eBK71dYuff6I9V8du/MOtbyuytMlavr+a9ytZVbDpgN5Vu1+Hk+XmiICOlDPFRz0AtlYs0yeGdshUlMtWZ4Fo1lKvWz4iNiHG4N1NfnqEVjdbbfOkcaJ4Ce677rENGshvhVst98ClEGQOMyPQBkc7d6J028HuVVL9EVewiE427hb9gmZktu3E1snnIEDnHsp4mJUZxcOdJOAYdH5iBeT0tSbqzgd4MqzvEOlLwplS3LewDM9S4I29F5uWeXZajZEKO0tZjKe+0Zftlq/S11YYrHkoC5e2VgIeq0gi/kUYM8wdzsZAhke/kD4F2+w8BavLIGyfGqdTq5xVlCk7iWq0jTzUzsmfity0YPletUZApEWMOZx0kap9tNN4QeArwKVbh8hKJ1UKOkvn2yoHinPSoP8e3dmrVR1NxL7lw+DjduJXuf0wveG3n8Yxr3WqclCGUbtAyeczIkobI2BZPKSTfA2Qrm6YiOWdM=) 2026-04-06 01:59:19.027757 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLj17saBPlkpKR5oALArwXiX8Cb6FzOi996Jdv7vc5AH5jLGC/t7YU/RWu2T3txezUZd/3O1sxhoOnaIXM0dj4Y=) 2026-04-06 01:59:19.027891 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHO6Yr4RIGOMqsNDQwVjpYp+rFVBibkbFoam8v9u7ooE) 2026-04-06 01:59:19.027910 | orchestrator | 2026-04-06 01:59:19.027923 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-06 01:59:19.027936 | orchestrator | Monday 06 April 2026 01:59:06 +0000 (0:00:01.204) 0:00:14.035 ********** 2026-04-06 01:59:19.027950 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD1BuHUduGX85h/sAENeSKf+Rv5Twa7rU6PPZhKJpTfKPvolbJ7xjSASu2Ton4amW8oJ04rzqRrzW9fio+fkrQTiXI5ZS8iestfBdsrYvgA/teNuZfs33njSTHl5DpsJUOcvEgMTcE5qPvtZKTQwcC9G5vNnsXUWg3fejB6C05MCqaqEJtek1RchBnqIPOkrkG98rCJ2hXPne1bLq4wzQS/uCBVXN09SSlzdLY6Upx66g/dHjebBDxi5VRcP8nMzflBtlzlF2Jpx1zf/LwQmDlvUKj6nrltRO0YWhFjxo/d6KXeEiTfNwhzDjgQ0++cKgNKlCX/nGWM578VLYE4M9KshAWsK+BN6shCqWtyAt/6Xevwv7yd1PUUs5Jag3B8kNnTOrMaTvU9K7r2a/e0NiODrFmkniR7F+p64oLbB19ZWbp/ir6U9Z1mYuBq8GmxZznqmpuMz/EqFRSy8a5py1J9QGO/eOA1gIOKCwoyjszhgs750dKCqmmHj4EN1ez5xCk=) 2026-04-06 01:59:19.027965 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHwM5Rn8ASFMmADXXePVJF3jb4YxuficeZwdtJyJ6OpObBdlWs0e7NCdxHvFPHV11gCdu3MnFcMVBYy+YB5B8cY=) 2026-04-06 01:59:19.028003 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICBSEPb3ayHGOYiKYoVGv9pFsAg/nOPm5uZmTmPIC+Oo) 2026-04-06 01:59:19.028015 | orchestrator | 2026-04-06 01:59:19.028027 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-04-06 01:59:19.028040 | orchestrator | Monday 06 April 2026 01:59:08 +0000 (0:00:01.207) 
0:00:15.243 ********** 2026-04-06 01:59:19.028051 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-06 01:59:19.028062 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-06 01:59:19.028073 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-06 01:59:19.028084 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-06 01:59:19.028095 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-06 01:59:19.028106 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-06 01:59:19.028118 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-06 01:59:19.028128 | orchestrator | 2026-04-06 01:59:19.028140 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-04-06 01:59:19.028152 | orchestrator | Monday 06 April 2026 01:59:13 +0000 (0:00:05.720) 0:00:20.963 ********** 2026-04-06 01:59:19.028164 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-06 01:59:19.028177 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-06 01:59:19.028188 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-06 01:59:19.028199 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-06 01:59:19.028210 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-06 01:59:19.028221 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-06 01:59:19.028232 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-06 01:59:19.028243 | orchestrator | 2026-04-06 01:59:19.028255 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-06 01:59:19.028266 | orchestrator | Monday 06 April 2026 01:59:14 +0000 (0:00:00.224) 0:00:21.188 ********** 2026-04-06 01:59:19.028278 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI8Y0HsepKUyZCOZdtXjQwNc1YRZoe34BbZEqiVmSrOGgfRh4Eeay5G9+bIEOzbr095pPImQeju7T3C30aa0ww4=) 2026-04-06 01:59:19.028312 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfURP57cSr/6MuS/t2GvqifxtnUJ8SIUBN547eqQnxe0TEMpbjNulybsuHwPeiBVhUD0Zbxi5/RiWSYWfSNEW5siDc5SGO6wwOobP7hzboWVOKxGcziKEjhKGz/c9y/9mh8gR0NOsD/8evdHk0+qOxSj33FMvfhPHnnUM83z8fs0sRQ51YS4MiOvHSLvWLA5VJQqRFoDZLN5/3vEu6bg+WpXipaynCPIryT80NFQLYEOVyWTTOzn95GPU7e8eXTQJ4hZbQqes/WywDgYwEMooornxuq5Ii6mp9Z3z/JIyOfpjhKQsB+tqi775MGQwnx3pqXojBGlxfHcrOgijGbaX+bXtehfszcKhlWMZX0GRzEIewYBfrBhXbkWQNQknWGFnfavkNSpGK4PPHWp8uooA4hTg5PWnR8VShlCRXHC9nVCXvgk1OyLPSw+78qc/xKE/r51SzlfpfXIHi50K8GAzCt6TUzEGWfGcbJ8N6N7Jj2DGTbN0TE7TGixLk443TV0E=) 2026-04-06 01:59:19.028394 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILRqGKZl9k26h5LpISZqCu09+51TGpi/xLh/pIKpOufC) 2026-04-06 
01:59:19.028408 | orchestrator | 2026-04-06 01:59:19.028422 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-06 01:59:19.028435 | orchestrator | Monday 06 April 2026 01:59:15 +0000 (0:00:01.232) 0:00:22.420 ********** 2026-04-06 01:59:19.028449 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPp1hlC40kUUlmMzoRfUJSUbD5W5IjKb9sV+Mr87S0e9dSqW341Hri994RSqyAeY5HjvkFiZWR7+ktv2H62QX2s=) 2026-04-06 01:59:19.028463 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+SBDEpq9WAm6SpJCjmlnbPtpsP+/xy8TNccYijGbwpac4scnxncgkMgrYqJF64MtyU0BPgqVwOzFBXd46wfMXHHtxLLCDScQT8159s8u3Utzff+YwN6LK6uQqnJNM6Y76xBlRmJ4EPEipDTS1twoz/j1IQb+xobuN3ROmhrkeZAmzA4vd5Q9C0a5x3x3DHp5qeDlZkJHdkUFyOkskkP6KgFDjm5L4Urgq/ZOjweBhiPsgk8pJA4eHIZyxbgm/yUMDKNKKADvMAwK4f5iRS5C0SeK/Y4/NpzXkmaJ9sudq0tRHxuUMu3TG/Dmo4cAHAQNL2fMcKM9UsZ2JCkL+nR0awcs8hD4KdPug7em0rPF22nYIwcP23Gekpds7pSt0z/HRRDAh7ddRJlFQG13ObmRrEv6EX4c8cAcuDV1ZZEiHrEBxlvD5LcRPvAxVjB5O/CSqt1EDL/3Kjt7w6gq3G+ipGwRu3jqVC8hlsN8/nsbKjHJqdmcdxDH89F5S2TQpAEM=) 2026-04-06 01:59:19.028477 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHJ0HifhB5dXyG1dop4eko9Yn7yfgNHHEshA6448SXyn) 2026-04-06 01:59:19.028490 | orchestrator | 2026-04-06 01:59:19.028503 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-06 01:59:19.028517 | orchestrator | Monday 06 April 2026 01:59:16 +0000 (0:00:01.203) 0:00:23.624 ********** 2026-04-06 01:59:19.028531 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL/REYseH6fOlHDWNtRWPdLdNZD7Vf7gQ2i29kGCngSQ) 2026-04-06 01:59:19.028544 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC24iXJkf7UUa571QVOTqLWd3NSprgzjAa+KGWe+sC5SScPU8/kwYpzLNLpqbJlYVb0D0aEC+UcR+6Z8bFTbDSyvcC8s5aNTIIaR8IhqPT8B8STxQ3dQH7pRkaU8hTj4/AwTUe8QNitdkrmt9LO0uZdC91oAUSgtZscX7+PcFrAsE8LcDPSa6Tfgsx8VjmmxsMk5NnFM0YBgA7NE1WPjbuQ7QDt7deKF9FN+Jf8gWyg4+Smb7jo4jBCkB3fAPUkUbSE/lbUoBuooZ1bbaUWwfzDgQ4tQylnBJUyAR6cd6M4eAd4E2t38ZXl5lSNauX1n8BwbpHGXcO2a59kaXFs/5VWDKV+2+P7YP1a1hkyy+1VDfhPMnM4oZQfjCvfFgS5C8IuzeGDuNYJVVQAPYXnGWHORzx9AkLDm9SzYkIcf0BdE3WFZY4A9liOyRsEHS89TwP/JtN44LqcZVeZgkjtMbSSy+gTbqoiLHGecP31j4I/d2+63/6nFA80UxTaUkS+Nrs=) 2026-04-06 01:59:19.028557 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI5ba35AWUWM3lW9z079XBsby2jYxVIOwFPDmLYPAOG7SrebOnBQtw+Vii3f6JRgOWXoDQc2+ZfhBSDRu9tRSoA=) 2026-04-06 01:59:19.028568 | orchestrator | 2026-04-06 01:59:19.028579 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-06 01:59:19.028590 | orchestrator | Monday 06 April 2026 01:59:17 +0000 (0:00:01.205) 0:00:24.829 ********** 2026-04-06 01:59:19.028602 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDYqvsylddwD/Ualudk5qKKI+vHeTcWxkqT+g4xD4QIPyYizDImpVs5vnlbFpYLA/24z2mM5/Wqps1GEJiOgp18=) 2026-04-06 01:59:19.028614 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCx+HaNcXQ3rEHpwnLF9gjZAOpnz0j4UYvwrgDmLyzzegJI05izHmHFxkMNJckfF7dQU24865fDofjgAqnGqb1gl3tmoL6pa90LA2NSKFOxk6yF6A77fPQl5HV6N6vXUYo5TXGfUqrgiFBvOSnFsJyJokzTAiqnllPwG0x7YoTaAW4g1/46gxj3XdSSSZ8+VP50Z/VhL4RSjhDOux6slqgtWDn5mHygjwbAaYBbIkSEQgLLXvUeoSQXcNC+npbjHFbCWw8htxrxLjjubQx584qOk2Sz51QEUMeO3+jKtFlVW0LaL5Qcx68wGyJ8c8xJcB8cF+pgVpsdjhAQdhO4BX1T5Ex5KVei7e0DAr0TmlejSElJXdQGKg1QafLUJdc60e7ZSeKxwU6x/e0Z7AccUFaA/dXLBAliJuwWSDzL2zit+JZhZmBkZgrcvN9noLQ1DZiSmGM+3BXKCRDH27wy7157wh2u0KH+rkv4sOFd8DGgVxNEbLSiAygpKuaWqwH7j5U=) 
2026-04-06 01:59:19.028639 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICm7u3lUzBjSY9f6AYfFu0GSoaW4hkLEeJ34M8pW21Dj) 2026-04-06 01:59:24.012387 | orchestrator | 2026-04-06 01:59:24.012479 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-06 01:59:24.012493 | orchestrator | Monday 06 April 2026 01:59:19 +0000 (0:00:01.296) 0:00:26.126 ********** 2026-04-06 01:59:24.012502 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBInuuKPxuj6Bhx+XUfLBlbk0XXAAf8JdIRZ2poi+l7WixfO/A3rP4FIM3o0kWdHMDiTNS9mUvRDykKDsbSurCrI=) 2026-04-06 01:59:24.012515 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCojdWh1VZro/eI9DCBbYMfepqo62kBndj/KSEc8iaRb5a5iDaOOKOGQHSwYtHaQ1tvlIXIFqlbip31GmrmrYlsgpXc7J4khsCRW/8yWDdv3iE5OSgkCXwT9NFQir6S9alccisv/+gFgysUKxYI2Q5u5lXOl/WRze3OTcVw2OtdmH3IZW7bIwEtpZKhR2mOt161JeTNeDIPQjDcXKstxWsrT8BanBdY5BHmtJqn9knqRw/lwSpBykcnp0eG/7NFLmDUKW3WzUpVk3jQ90jhdwfgHcmYHTt4LwHqeCzIX0cavCPuRvYtEyaHEVjIzEbS7hnbQXHe63hDfm5u9nB79X+jtkXOzusVl+ZMfEmAjzpmaCUO7siLWB7HyJDvHfo9s/tSeJTNT+Wwfyx/XIaRIWy1iHdHmfh7M4qE7vlJkTZ6O7l689erdtDXzhEHL53wAQGfTncZ8NcrMuE1RGOelCyg0E1e0Qp3zQALyTmzVk1Qy+xh9JpLy3i06z6y5fLi72k=) 2026-04-06 01:59:24.012525 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPX2klYVu98XNyz0KNrdMi2jPMYd/T7q1ZO29D2kJsHk) 2026-04-06 01:59:24.012534 | orchestrator | 2026-04-06 01:59:24.012541 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-06 01:59:24.012549 | orchestrator | Monday 06 April 2026 01:59:20 +0000 (0:00:01.214) 0:00:27.341 ********** 2026-04-06 01:59:24.012557 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDvTbedwXbsTCAd0abxbUFv9YS4qdO+1eBK71dYuff6I9V8du/MOtbyuytMlavr+a9ytZVbDpgN5Vu1+Hk+XmiICOlDPFRz0AtlYs0yeGdshUlMtWZ4Fo1lKvWz4iNiHG4N1NfnqEVjdbbfOkcaJ4Ce677rENGshvhVst98ClEGQOMyPQBkc7d6J028HuVVL9EVewiE427hb9gmZktu3E1snnIEDnHsp4mJUZxcOdJOAYdH5iBeT0tSbqzgd4MqzvEOlLwplS3LewDM9S4I29F5uWeXZajZEKO0tZjKe+0Zftlq/S11YYrHkoC5e2VgIeq0gi/kUYM8wdzsZAhke/kD4F2+w8BavLIGyfGqdTq5xVlCk7iWq0jTzUzsmfity0YPletUZApEWMOZx0kap9tNN4QeArwKVbh8hKJ1UKOkvn2yoHinPSoP8e3dmrVR1NxL7lw+DjduJXuf0wveG3n8Yxr3WqclCGUbtAyeczIkobI2BZPKSTfA2Qrm6YiOWdM=) 2026-04-06 01:59:24.012565 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLj17saBPlkpKR5oALArwXiX8Cb6FzOi996Jdv7vc5AH5jLGC/t7YU/RWu2T3txezUZd/3O1sxhoOnaIXM0dj4Y=) 2026-04-06 01:59:24.012573 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHO6Yr4RIGOMqsNDQwVjpYp+rFVBibkbFoam8v9u7ooE) 2026-04-06 01:59:24.012579 | orchestrator | 2026-04-06 01:59:24.012587 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-06 01:59:24.012603 | orchestrator | Monday 06 April 2026 01:59:21 +0000 (0:00:01.156) 0:00:28.497 ********** 2026-04-06 01:59:24.012611 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD1BuHUduGX85h/sAENeSKf+Rv5Twa7rU6PPZhKJpTfKPvolbJ7xjSASu2Ton4amW8oJ04rzqRrzW9fio+fkrQTiXI5ZS8iestfBdsrYvgA/teNuZfs33njSTHl5DpsJUOcvEgMTcE5qPvtZKTQwcC9G5vNnsXUWg3fejB6C05MCqaqEJtek1RchBnqIPOkrkG98rCJ2hXPne1bLq4wzQS/uCBVXN09SSlzdLY6Upx66g/dHjebBDxi5VRcP8nMzflBtlzlF2Jpx1zf/LwQmDlvUKj6nrltRO0YWhFjxo/d6KXeEiTfNwhzDjgQ0++cKgNKlCX/nGWM578VLYE4M9KshAWsK+BN6shCqWtyAt/6Xevwv7yd1PUUs5Jag3B8kNnTOrMaTvU9K7r2a/e0NiODrFmkniR7F+p64oLbB19ZWbp/ir6U9Z1mYuBq8GmxZznqmpuMz/EqFRSy8a5py1J9QGO/eOA1gIOKCwoyjszhgs750dKCqmmHj4EN1ez5xCk=) 2026-04-06 01:59:24.012638 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHwM5Rn8ASFMmADXXePVJF3jb4YxuficeZwdtJyJ6OpObBdlWs0e7NCdxHvFPHV11gCdu3MnFcMVBYy+YB5B8cY=) 2026-04-06 01:59:24.012646 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICBSEPb3ayHGOYiKYoVGv9pFsAg/nOPm5uZmTmPIC+Oo) 2026-04-06 01:59:24.012653 | orchestrator | 2026-04-06 01:59:24.012660 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-04-06 01:59:24.012689 | orchestrator | Monday 06 April 2026 01:59:22 +0000 (0:00:01.195) 0:00:29.692 ********** 2026-04-06 01:59:24.012697 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-06 01:59:24.012704 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-06 01:59:24.012710 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-06 01:59:24.012716 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-06 01:59:24.012723 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-06 01:59:24.012729 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-06 01:59:24.012735 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-06 01:59:24.012741 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:59:24.012747 | orchestrator | 2026-04-06 01:59:24.012769 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-04-06 01:59:24.012776 | orchestrator | Monday 06 April 2026 01:59:22 +0000 (0:00:00.181) 0:00:29.874 ********** 2026-04-06 01:59:24.012782 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:59:24.012788 | orchestrator | 2026-04-06 01:59:24.012794 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-04-06 01:59:24.012805 | orchestrator | Monday 06 April 2026 
01:59:22 +0000 (0:00:00.073) 0:00:29.948 ********** 2026-04-06 01:59:24.012811 | orchestrator | skipping: [testbed-manager] 2026-04-06 01:59:24.012816 | orchestrator | 2026-04-06 01:59:24.012822 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-04-06 01:59:24.012827 | orchestrator | Monday 06 April 2026 01:59:22 +0000 (0:00:00.068) 0:00:30.017 ********** 2026-04-06 01:59:24.012832 | orchestrator | changed: [testbed-manager] 2026-04-06 01:59:24.012838 | orchestrator | 2026-04-06 01:59:24.012844 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 01:59:24.012851 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-06 01:59:24.012860 | orchestrator | 2026-04-06 01:59:24.012866 | orchestrator | 2026-04-06 01:59:24.012873 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 01:59:24.012880 | orchestrator | Monday 06 April 2026 01:59:23 +0000 (0:00:00.798) 0:00:30.815 ********** 2026-04-06 01:59:24.012886 | orchestrator | =============================================================================== 2026-04-06 01:59:24.012893 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.41s 2026-04-06 01:59:24.012900 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.72s 2026-04-06 01:59:24.012908 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.30s 2026-04-06 01:59:24.012915 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.30s 2026-04-06 01:59:24.012922 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.24s 2026-04-06 01:59:24.012929 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.23s 
2026-04-06 01:59:24.012936 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2026-04-06 01:59:24.012943 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2026-04-06 01:59:24.012950 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2026-04-06 01:59:24.012958 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-04-06 01:59:24.012965 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-04-06 01:59:24.012972 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-04-06 01:59:24.012979 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-04-06 01:59:24.012986 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-04-06 01:59:24.012999 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-04-06 01:59:24.013005 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-04-06 01:59:24.013011 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.80s 2026-04-06 01:59:24.013017 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.22s 2026-04-06 01:59:24.013025 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2026-04-06 01:59:24.013032 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2026-04-06 01:59:24.448644 | orchestrator | + osism apply squid 2026-04-06 01:59:37.021745 | orchestrator | 2026-04-06 01:59:37 | INFO  | Task 27f06d06-3d7c-40c3-86cd-2c04807d6037 (squid) was prepared for execution. 
2026-04-06 01:59:37.021877 | orchestrator | 2026-04-06 01:59:37 | INFO  | It takes a moment until task 27f06d06-3d7c-40c3-86cd-2c04807d6037 (squid) has been started and output is visible here. 2026-04-06 02:01:49.455461 | orchestrator | 2026-04-06 02:01:49.455560 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-04-06 02:01:49.455569 | orchestrator | 2026-04-06 02:01:49.455575 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-04-06 02:01:49.455581 | orchestrator | Monday 06 April 2026 01:59:41 +0000 (0:00:00.186) 0:00:00.186 ********** 2026-04-06 02:01:49.455586 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-04-06 02:01:49.455591 | orchestrator | 2026-04-06 02:01:49.455596 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-04-06 02:01:49.455609 | orchestrator | Monday 06 April 2026 01:59:41 +0000 (0:00:00.100) 0:00:00.286 ********** 2026-04-06 02:01:49.455615 | orchestrator | ok: [testbed-manager] 2026-04-06 02:01:49.455620 | orchestrator | 2026-04-06 02:01:49.455625 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-04-06 02:01:49.455630 | orchestrator | Monday 06 April 2026 01:59:43 +0000 (0:00:02.066) 0:00:02.352 ********** 2026-04-06 02:01:49.455636 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-04-06 02:01:49.455640 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-04-06 02:01:49.455645 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-04-06 02:01:49.455650 | orchestrator | 2026-04-06 02:01:49.455655 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-04-06 02:01:49.455659 | orchestrator | Monday 06 
April 2026 01:59:45 +0000 (0:00:01.325) 0:00:03.678 ********** 2026-04-06 02:01:49.455664 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-04-06 02:01:49.455671 | orchestrator | 2026-04-06 02:01:49.455680 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-04-06 02:01:49.455688 | orchestrator | Monday 06 April 2026 01:59:46 +0000 (0:00:01.207) 0:00:04.885 ********** 2026-04-06 02:01:49.455696 | orchestrator | ok: [testbed-manager] 2026-04-06 02:01:49.455703 | orchestrator | 2026-04-06 02:01:49.455711 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-04-06 02:01:49.455718 | orchestrator | Monday 06 April 2026 01:59:46 +0000 (0:00:00.419) 0:00:05.305 ********** 2026-04-06 02:01:49.455728 | orchestrator | changed: [testbed-manager] 2026-04-06 02:01:49.455736 | orchestrator | 2026-04-06 02:01:49.455745 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-04-06 02:01:49.455753 | orchestrator | Monday 06 April 2026 01:59:47 +0000 (0:00:01.065) 0:00:06.371 ********** 2026-04-06 02:01:49.455762 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-04-06 02:01:49.455772 | orchestrator | ok: [testbed-manager] 2026-04-06 02:01:49.455776 | orchestrator | 2026-04-06 02:01:49.455781 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-04-06 02:01:49.455805 | orchestrator | Monday 06 April 2026 02:00:24 +0000 (0:00:36.725) 0:00:43.096 ********** 2026-04-06 02:01:49.455810 | orchestrator | changed: [testbed-manager] 2026-04-06 02:01:49.455815 | orchestrator | 2026-04-06 02:01:49.455819 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-04-06 02:01:49.455824 | orchestrator | Monday 06 April 2026 02:00:48 +0000 (0:00:23.555) 0:01:06.651 ********** 2026-04-06 02:01:49.455829 | orchestrator | Pausing for 60 seconds 2026-04-06 02:01:49.455834 | orchestrator | changed: [testbed-manager] 2026-04-06 02:01:49.455838 | orchestrator | 2026-04-06 02:01:49.455843 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-04-06 02:01:49.455848 | orchestrator | Monday 06 April 2026 02:01:48 +0000 (0:01:00.106) 0:02:06.758 ********** 2026-04-06 02:01:49.455853 | orchestrator | ok: [testbed-manager] 2026-04-06 02:01:49.455857 | orchestrator | 2026-04-06 02:01:49.455862 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-04-06 02:01:49.455867 | orchestrator | Monday 06 April 2026 02:01:48 +0000 (0:00:00.073) 0:02:06.832 ********** 2026-04-06 02:01:49.455871 | orchestrator | changed: [testbed-manager] 2026-04-06 02:01:49.455876 | orchestrator | 2026-04-06 02:01:49.455880 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 02:01:49.455885 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 02:01:49.455889 | orchestrator | 2026-04-06 02:01:49.455894 | orchestrator | 2026-04-06 02:01:49.455899 | orchestrator | 
TASKS RECAP ********************************************************************
2026-04-06 02:01:49.455903 | orchestrator | Monday 06 April 2026 02:01:49 +0000 (0:00:00.687) 0:02:07.519 **********
2026-04-06 02:01:49.455908 | orchestrator | ===============================================================================
2026-04-06 02:01:49.455912 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.11s
2026-04-06 02:01:49.455917 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 36.73s
2026-04-06 02:01:49.455921 | orchestrator | osism.services.squid : Restart squid service --------------------------- 23.56s
2026-04-06 02:01:49.455939 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.07s
2026-04-06 02:01:49.455944 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.33s
2026-04-06 02:01:49.455949 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.21s
2026-04-06 02:01:49.455953 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.07s
2026-04-06 02:01:49.455958 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.69s
2026-04-06 02:01:49.455963 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.42s
2026-04-06 02:01:49.455967 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s
2026-04-06 02:01:49.455972 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s
2026-04-06 02:01:49.864669 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-04-06 02:01:49.864810 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-04-06 02:01:49.927090 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-06 02:01:49.927214 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-04-06 02:01:49.932860 | orchestrator | + set -e
2026-04-06 02:01:49.932932 | orchestrator | + NAMESPACE=kolla/release
2026-04-06 02:01:49.932948 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-04-06 02:01:49.937871 | orchestrator | ++ semver 9.5.0 9.0.0
2026-04-06 02:01:50.011374 | orchestrator | + [[ 1 -lt 0 ]]
2026-04-06 02:01:50.012371 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-04-06 02:02:02.438177 | orchestrator | 2026-04-06 02:02:02 | INFO  | Task a59c5d95-ac45-4c6a-b937-a8547d9d0135 (operator) was prepared for execution.
2026-04-06 02:02:02.438304 | orchestrator | 2026-04-06 02:02:02 | INFO  | It takes a moment until task a59c5d95-ac45-4c6a-b937-a8547d9d0135 (operator) has been started and output is visible here.
2026-04-06 02:02:19.265814 | orchestrator |
2026-04-06 02:02:19.265923 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-04-06 02:02:19.265938 | orchestrator |
2026-04-06 02:02:19.265949 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-06 02:02:19.265959 | orchestrator | Monday 06 April 2026 02:02:07 +0000 (0:00:00.185) 0:00:00.185 **********
2026-04-06 02:02:19.265970 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:02:19.265980 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:02:19.265990 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:02:19.265999 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:02:19.266009 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:02:19.266079 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:02:19.266089 | orchestrator |
2026-04-06 02:02:19.266100 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-04-06 02:02:19.266110 | orchestrator | Monday 06 April 2026 02:02:10 +0000 (0:00:03.208) 0:00:03.393 **********
2026-04-06 02:02:19.266120 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:02:19.266129 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:02:19.266139 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:02:19.266164 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:02:19.266174 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:02:19.266184 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:02:19.266194 | orchestrator |
2026-04-06 02:02:19.266204 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-04-06 02:02:19.266213 | orchestrator |
2026-04-06 02:02:19.266223 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-04-06 02:02:19.266233 | orchestrator | Monday 06 April 2026 02:02:11 +0000 (0:00:00.832) 0:00:04.226 **********
2026-04-06 02:02:19.266247 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:02:19.266261 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:02:19.266271 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:02:19.266281 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:02:19.266290 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:02:19.266301 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:02:19.266311 | orchestrator |
2026-04-06 02:02:19.266324 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-04-06 02:02:19.266335 | orchestrator | Monday 06 April 2026 02:02:11 +0000 (0:00:00.205) 0:00:04.431 **********
2026-04-06 02:02:19.266347 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:02:19.266357 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:02:19.266369 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:02:19.266380 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:02:19.266391 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:02:19.266402 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:02:19.266445 | orchestrator |
2026-04-06 02:02:19.266457 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-04-06 02:02:19.266489 | orchestrator | Monday 06 April 2026 02:02:11 +0000 (0:00:00.186) 0:00:04.618 **********
2026-04-06 02:02:19.266511 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:02:19.266524 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:02:19.266541 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:02:19.266559 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:02:19.266584 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:02:19.266605 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:02:19.266620 | orchestrator |
2026-04-06 02:02:19.266637 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-04-06 02:02:19.266654 | orchestrator | Monday 06 April 2026 02:02:12 +0000 (0:00:00.607) 0:00:05.225 **********
2026-04-06 02:02:19.266670 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:02:19.266685 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:02:19.266700 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:02:19.266717 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:02:19.266732 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:02:19.266748 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:02:19.266792 | orchestrator |
2026-04-06 02:02:19.266811 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-04-06 02:02:19.266828 | orchestrator | Monday 06 April 2026 02:02:13 +0000 (0:00:00.783) 0:00:06.008 **********
2026-04-06 02:02:19.266846 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-04-06 02:02:19.266863 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-04-06 02:02:19.266877 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-04-06 02:02:19.266887 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-04-06 02:02:19.266896 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-04-06 02:02:19.266906 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-04-06 02:02:19.266915 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-04-06 02:02:19.266925 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-04-06 02:02:19.266934 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-04-06 02:02:19.266944 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-04-06 02:02:19.266953 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-04-06 02:02:19.266963 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-04-06 02:02:19.266973 | orchestrator |
2026-04-06 02:02:19.266982 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-04-06 02:02:19.266992 | orchestrator | Monday 06 April 2026 02:02:14 +0000 (0:00:01.185) 0:00:07.193 **********
2026-04-06 02:02:19.267002 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:02:19.267011 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:02:19.267021 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:02:19.267030 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:02:19.267040 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:02:19.267050 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:02:19.267059 | orchestrator |
2026-04-06 02:02:19.267069 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-04-06 02:02:19.267079 | orchestrator | Monday 06 April 2026 02:02:15 +0000 (0:00:01.309) 0:00:08.503 **********
2026-04-06 02:02:19.267089 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-04-06 02:02:19.267099 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-04-06 02:02:19.267108 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-04-06 02:02:19.267118 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-04-06 02:02:19.267149 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-04-06 02:02:19.267160 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-04-06 02:02:19.267169 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-04-06 02:02:19.267179 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-04-06 02:02:19.267189 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-04-06 02:02:19.267198 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-04-06 02:02:19.267208 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-04-06 02:02:19.267218 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-04-06 02:02:19.267227 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-04-06 02:02:19.267237 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-04-06 02:02:19.267246 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-04-06 02:02:19.267261 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-04-06 02:02:19.267276 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-04-06 02:02:19.267299 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-04-06 02:02:19.267320 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-04-06 02:02:19.267335 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-04-06 02:02:19.267363 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-04-06 02:02:19.267379 | orchestrator |
2026-04-06 02:02:19.267394 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-04-06 02:02:19.267440 | orchestrator | Monday 06 April 2026 02:02:16 +0000 (0:00:01.295) 0:00:09.798 **********
2026-04-06 02:02:19.267456 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:02:19.267471 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:02:19.267485 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:02:19.267499 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:02:19.267514 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:02:19.267530 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:02:19.267545 | orchestrator |
2026-04-06 02:02:19.267561 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-04-06 02:02:19.267577 | orchestrator | Monday 06 April 2026 02:02:17 +0000 (0:00:00.195) 0:00:09.994 **********
2026-04-06 02:02:19.267592 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:02:19.267609 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:02:19.267626 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:02:19.267642 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:02:19.267655 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:02:19.267665 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:02:19.267677 | orchestrator |
2026-04-06 02:02:19.267694 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-04-06 02:02:19.267710 | orchestrator | Monday 06 April 2026 02:02:17 +0000 (0:00:00.224) 0:00:10.219 **********
2026-04-06 02:02:19.267725 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:02:19.267740 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:02:19.267757 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:02:19.267772 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:02:19.267789 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:02:19.267806 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:02:19.267822 | orchestrator |
2026-04-06 02:02:19.267838 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-04-06 02:02:19.267852 | orchestrator | Monday 06 April 2026 02:02:17 +0000 (0:00:00.606) 0:00:10.825 **********
2026-04-06 02:02:19.267862 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:02:19.267871 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:02:19.267881 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:02:19.267890 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:02:19.267900 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:02:19.267909 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:02:19.267919 | orchestrator |
2026-04-06 02:02:19.267929 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-04-06 02:02:19.267939 | orchestrator | Monday 06 April 2026 02:02:18 +0000 (0:00:00.201) 0:00:11.026 **********
2026-04-06 02:02:19.267949 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-06 02:02:19.267972 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-06 02:02:19.267983 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:02:19.267992 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:02:19.268002 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-06 02:02:19.268011 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:02:19.268021 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-06 02:02:19.268031 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:02:19.268040 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-06 02:02:19.268050 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:02:19.268059 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-06 02:02:19.268069 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:02:19.268078 | orchestrator |
2026-04-06 02:02:19.268088 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-04-06 02:02:19.268098 | orchestrator | Monday 06 April 2026 02:02:18 +0000 (0:00:00.756) 0:00:11.783 **********
2026-04-06 02:02:19.268117 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:02:19.268126 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:02:19.268136 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:02:19.268146 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:02:19.268155 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:02:19.268164 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:02:19.268174 | orchestrator |
2026-04-06 02:02:19.268184 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-04-06 02:02:19.268193 | orchestrator | Monday 06 April 2026 02:02:19 +0000 (0:00:00.201) 0:00:11.985 **********
2026-04-06 02:02:19.268203 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:02:19.268213 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:02:19.268222 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:02:19.268232 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:02:19.268253 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:02:20.737498 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:02:20.737636 | orchestrator |
2026-04-06 02:02:20.737671 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-04-06 02:02:20.737745 | orchestrator | Monday 06 April 2026 02:02:19 +0000 (0:00:00.242) 0:00:12.227 **********
2026-04-06 02:02:20.737762 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:02:20.737771 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:02:20.737780 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:02:20.737788 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:02:20.737796 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:02:20.737804 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:02:20.737812 | orchestrator |
2026-04-06 02:02:20.737820 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-04-06 02:02:20.737828 | orchestrator | Monday 06 April 2026 02:02:19 +0000 (0:00:00.187) 0:00:12.415 **********
2026-04-06 02:02:20.737836 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:02:20.737844 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:02:20.737868 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:02:20.737877 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:02:20.737885 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:02:20.737893 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:02:20.737900 | orchestrator |
2026-04-06 02:02:20.737908 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-04-06 02:02:20.737916 | orchestrator | Monday 06 April 2026 02:02:20 +0000 (0:00:00.671) 0:00:13.086 **********
2026-04-06 02:02:20.737924 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:02:20.737932 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:02:20.737941 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:02:20.737954 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:02:20.737968 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:02:20.737981 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:02:20.737993 | orchestrator |
2026-04-06 02:02:20.738007 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 02:02:20.738071 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-06 02:02:20.738088 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-06 02:02:20.738101 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-06 02:02:20.738117 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-06 02:02:20.738132 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-06 02:02:20.738171 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-06 02:02:20.738181 | orchestrator |
2026-04-06 02:02:20.738192 | orchestrator |
2026-04-06 02:02:20.738202 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 02:02:20.738211 | orchestrator | Monday 06 April 2026 02:02:20 +0000 (0:00:00.288) 0:00:13.375 **********
2026-04-06 02:02:20.738221 | orchestrator | ===============================================================================
2026-04-06 02:02:20.738229 | orchestrator | Gathering Facts --------------------------------------------------------- 3.21s
2026-04-06 02:02:20.738236 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.31s
2026-04-06 02:02:20.738244 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.30s
2026-04-06 02:02:20.738253 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.19s
2026-04-06 02:02:20.738261 | orchestrator | Do not require tty for all users ---------------------------------------- 0.83s
2026-04-06 02:02:20.738269 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.78s
2026-04-06 02:02:20.738277 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.76s
2026-04-06 02:02:20.738285 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.67s
2026-04-06 02:02:20.738292 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.61s
2026-04-06 02:02:20.738300 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.61s
2026-04-06 02:02:20.738308 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.29s
2026-04-06 02:02:20.738316 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.24s
2026-04-06 02:02:20.738324 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.23s
2026-04-06 02:02:20.738332 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.21s
2026-04-06 02:02:20.738340 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.20s
2026-04-06 02:02:20.738347 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s
2026-04-06 02:02:20.738355 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.20s
2026-04-06 02:02:20.738363 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.19s
2026-04-06 02:02:20.738371 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.19s
2026-04-06 02:02:21.161270 | orchestrator | + osism apply --environment custom facts
2026-04-06 02:02:23.364577 | orchestrator | 2026-04-06 02:02:23 | INFO  | Trying to run play facts in environment custom
2026-04-06 02:02:33.507458 | orchestrator | 2026-04-06 02:02:33 | INFO  | Task 0ece3a5c-10a5-4786-81b6-2fa0af5eb885 (facts) was prepared for execution.
2026-04-06 02:02:33.507553 | orchestrator | 2026-04-06 02:02:33 | INFO  | It takes a moment until task 0ece3a5c-10a5-4786-81b6-2fa0af5eb885 (facts) has been started and output is visible here.
2026-04-06 02:03:18.380350 | orchestrator | 2026-04-06 02:03:18.380434 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-04-06 02:03:18.380441 | orchestrator | 2026-04-06 02:03:18.380446 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-06 02:03:18.380451 | orchestrator | Monday 06 April 2026 02:02:38 +0000 (0:00:00.091) 0:00:00.091 ********** 2026-04-06 02:03:18.380457 | orchestrator | ok: [testbed-manager] 2026-04-06 02:03:18.380462 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:03:18.380490 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:03:18.380498 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:03:18.380503 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:03:18.380507 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:03:18.380530 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:03:18.380535 | orchestrator | 2026-04-06 02:03:18.380540 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-04-06 02:03:18.380544 | orchestrator | Monday 06 April 2026 02:02:39 +0000 (0:00:01.424) 0:00:01.516 ********** 2026-04-06 02:03:18.380549 | orchestrator | ok: [testbed-manager] 2026-04-06 02:03:18.380553 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:03:18.380557 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:03:18.380562 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:03:18.380566 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:03:18.380570 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:03:18.380574 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:03:18.380579 | orchestrator | 2026-04-06 02:03:18.380583 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-04-06 02:03:18.380587 | orchestrator | 2026-04-06 02:03:18.380592 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-04-06 02:03:18.380597 | orchestrator | Monday 06 April 2026 02:02:41 +0000 (0:00:01.283) 0:00:02.799 ********** 2026-04-06 02:03:18.380601 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:03:18.380605 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:03:18.380610 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:03:18.380614 | orchestrator | 2026-04-06 02:03:18.380619 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-06 02:03:18.380624 | orchestrator | Monday 06 April 2026 02:02:41 +0000 (0:00:00.115) 0:00:02.915 ********** 2026-04-06 02:03:18.380628 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:03:18.380633 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:03:18.380637 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:03:18.380641 | orchestrator | 2026-04-06 02:03:18.380646 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-06 02:03:18.380650 | orchestrator | Monday 06 April 2026 02:02:41 +0000 (0:00:00.212) 0:00:03.127 ********** 2026-04-06 02:03:18.380654 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:03:18.380659 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:03:18.380663 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:03:18.380667 | orchestrator | 2026-04-06 02:03:18.380671 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-06 02:03:18.380677 | orchestrator | Monday 06 April 2026 02:02:41 +0000 (0:00:00.202) 0:00:03.330 ********** 2026-04-06 02:03:18.380682 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 02:03:18.380688 | orchestrator | 2026-04-06 02:03:18.380693 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-04-06 02:03:18.380697 | orchestrator | Monday 06 April 2026 02:02:41 +0000 (0:00:00.147) 0:00:03.478 ********** 2026-04-06 02:03:18.380701 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:03:18.380706 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:03:18.380710 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:03:18.380714 | orchestrator | 2026-04-06 02:03:18.380718 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-06 02:03:18.380723 | orchestrator | Monday 06 April 2026 02:02:42 +0000 (0:00:00.432) 0:00:03.910 ********** 2026-04-06 02:03:18.380727 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:03:18.380732 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:03:18.380736 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:03:18.380740 | orchestrator | 2026-04-06 02:03:18.380745 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-06 02:03:18.380749 | orchestrator | Monday 06 April 2026 02:02:42 +0000 (0:00:00.136) 0:00:04.047 ********** 2026-04-06 02:03:18.380753 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:03:18.380757 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:03:18.380762 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:03:18.380766 | orchestrator | 2026-04-06 02:03:18.380770 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-06 02:03:18.380780 | orchestrator | Monday 06 April 2026 02:02:43 +0000 (0:00:01.034) 0:00:05.082 ********** 2026-04-06 02:03:18.380784 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:03:18.380789 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:03:18.380793 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:03:18.380797 | orchestrator | 2026-04-06 02:03:18.380802 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-06 
02:03:18.380806 | orchestrator | Monday 06 April 2026 02:02:43 +0000 (0:00:00.455) 0:00:05.537 ********** 2026-04-06 02:03:18.380810 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:03:18.380815 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:03:18.380819 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:03:18.380823 | orchestrator | 2026-04-06 02:03:18.380827 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-06 02:03:18.380866 | orchestrator | Monday 06 April 2026 02:02:44 +0000 (0:00:01.026) 0:00:06.563 ********** 2026-04-06 02:03:18.380871 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:03:18.380876 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:03:18.380880 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:03:18.380884 | orchestrator | 2026-04-06 02:03:18.380889 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-04-06 02:03:18.380893 | orchestrator | Monday 06 April 2026 02:03:01 +0000 (0:00:16.194) 0:00:22.757 ********** 2026-04-06 02:03:18.380897 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:03:18.380902 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:03:18.380907 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:03:18.380912 | orchestrator | 2026-04-06 02:03:18.380917 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-04-06 02:03:18.380933 | orchestrator | Monday 06 April 2026 02:03:01 +0000 (0:00:00.135) 0:00:22.893 ********** 2026-04-06 02:03:18.380939 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:03:18.380943 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:03:18.380948 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:03:18.380953 | orchestrator | 2026-04-06 02:03:18.380962 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-06 
02:03:18.380968 | orchestrator | Monday 06 April 2026 02:03:09 +0000 (0:00:07.740) 0:00:30.633 ********** 2026-04-06 02:03:18.380975 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:03:18.380982 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:03:18.380989 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:03:18.380997 | orchestrator | 2026-04-06 02:03:18.381004 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-04-06 02:03:18.381011 | orchestrator | Monday 06 April 2026 02:03:09 +0000 (0:00:00.508) 0:00:31.142 ********** 2026-04-06 02:03:18.381018 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-04-06 02:03:18.381026 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-04-06 02:03:18.381033 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-04-06 02:03:18.381040 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-04-06 02:03:18.381046 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-04-06 02:03:18.381053 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-04-06 02:03:18.381059 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-04-06 02:03:18.381066 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-04-06 02:03:18.381072 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-04-06 02:03:18.381079 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-04-06 02:03:18.381085 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-04-06 02:03:18.381092 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-04-06 02:03:18.381098 | orchestrator | 2026-04-06 02:03:18.381105 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] *****
2026-04-06 02:03:18.381118 | orchestrator | Monday 06 April 2026 02:03:13 +0000 (0:00:03.724) 0:00:34.866 **********
2026-04-06 02:03:18.381125 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:03:18.381132 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:03:18.381139 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:03:18.381146 | orchestrator |
2026-04-06 02:03:18.381152 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-06 02:03:18.381158 | orchestrator |
2026-04-06 02:03:18.381165 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-06 02:03:18.381171 | orchestrator | Monday 06 April 2026 02:03:14 +0000 (0:00:01.449) 0:00:36.316 **********
2026-04-06 02:03:18.381177 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:03:18.381183 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:03:18.381190 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:03:18.381196 | orchestrator | ok: [testbed-manager]
2026-04-06 02:03:18.381202 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:03:18.381208 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:03:18.381214 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:03:18.381220 | orchestrator |
2026-04-06 02:03:18.381226 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 02:03:18.381234 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:03:18.381240 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:03:18.381249 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:03:18.381255 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:03:18.381261 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:03:18.381268 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:03:18.381275 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:03:18.381281 | orchestrator |
2026-04-06 02:03:18.381288 | orchestrator |
2026-04-06 02:03:18.381295 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 02:03:18.381301 | orchestrator | Monday 06 April 2026 02:03:18 +0000 (0:00:03.615) 0:00:39.931 **********
2026-04-06 02:03:18.381307 | orchestrator | ===============================================================================
2026-04-06 02:03:18.381314 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.19s
2026-04-06 02:03:18.381321 | orchestrator | Install required packages (Debian) -------------------------------------- 7.74s
2026-04-06 02:03:18.381328 | orchestrator | Copy fact files --------------------------------------------------------- 3.72s
2026-04-06 02:03:18.381334 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.62s
2026-04-06 02:03:18.381340 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.45s
2026-04-06 02:03:18.381345 | orchestrator | Create custom facts directory ------------------------------------------- 1.42s
2026-04-06 02:03:18.381361 | orchestrator | Copy fact file ---------------------------------------------------------- 1.28s
2026-04-06 02:03:18.755538 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.04s
2026-04-06 02:03:18.755633 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.03s
2026-04-06 02:03:18.755671 | orchestrator | Create custom facts directory ------------------------------------------- 0.51s
2026-04-06 02:03:18.755710 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2026-04-06 02:03:18.755723 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2026-04-06 02:03:18.755735 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2026-04-06 02:03:18.755747 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s
2026-04-06 02:03:18.755759 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2026-04-06 02:03:18.755772 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s
2026-04-06 02:03:18.755784 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.14s
2026-04-06 02:03:18.755797 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2026-04-06 02:03:19.196386 | orchestrator | + osism apply bootstrap
2026-04-06 02:03:31.794108 | orchestrator | 2026-04-06 02:03:31 | INFO  | Task 2cd56658-ac63-4faf-b8fb-aabe111245ee (bootstrap) was prepared for execution.
2026-04-06 02:03:31.794234 | orchestrator | 2026-04-06 02:03:31 | INFO  | It takes a moment until task 2cd56658-ac63-4faf-b8fb-aabe111245ee (bootstrap) has been started and output is visible here.
2026-04-06 02:03:48.512796 | orchestrator |
2026-04-06 02:03:48.512884 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-04-06 02:03:48.512893 | orchestrator |
2026-04-06 02:03:48.512899 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-04-06 02:03:48.512904 | orchestrator | Monday 06 April 2026 02:03:36 +0000 (0:00:00.173) 0:00:00.173 **********
2026-04-06 02:03:48.512909 | orchestrator | ok: [testbed-manager]
2026-04-06 02:03:48.512914 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:03:48.512919 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:03:48.512924 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:03:48.512929 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:03:48.512934 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:03:48.512938 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:03:48.512944 | orchestrator |
2026-04-06 02:03:48.512948 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-06 02:03:48.512953 | orchestrator |
2026-04-06 02:03:48.512958 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-06 02:03:48.512962 | orchestrator | Monday 06 April 2026 02:03:36 +0000 (0:00:00.291) 0:00:00.465 **********
2026-04-06 02:03:48.512967 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:03:48.512971 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:03:48.512976 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:03:48.512981 | orchestrator | ok: [testbed-manager]
2026-04-06 02:03:48.512985 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:03:48.512990 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:03:48.512994 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:03:48.512999 | orchestrator |
2026-04-06 02:03:48.513003 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-04-06 02:03:48.513008 | orchestrator |
2026-04-06 02:03:48.513012 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-06 02:03:48.513017 | orchestrator | Monday 06 April 2026 02:03:40 +0000 (0:00:03.469) 0:00:03.934 **********
2026-04-06 02:03:48.513022 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-06 02:03:48.513027 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-04-06 02:03:48.513031 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-06 02:03:48.513036 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-06 02:03:48.513040 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-06 02:03:48.513045 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-04-06 02:03:48.513050 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-06 02:03:48.513054 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-04-06 02:03:48.513058 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-06 02:03:48.513081 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-06 02:03:48.513086 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-06 02:03:48.513090 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-06 02:03:48.513095 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-06 02:03:48.513099 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-06 02:03:48.513103 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-06 02:03:48.513108 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-06 02:03:48.513113 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-06 02:03:48.513117 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-06 02:03:48.513122 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-04-06 02:03:48.513126 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-06 02:03:48.513131 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-06 02:03:48.513135 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-06 02:03:48.513140 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-06 02:03:48.513144 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-06 02:03:48.513149 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:03:48.513153 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-06 02:03:48.513158 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:03:48.513162 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-04-06 02:03:48.513167 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-06 02:03:48.513172 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-06 02:03:48.513176 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-06 02:03:48.513181 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-06 02:03:48.513185 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:03:48.513189 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-06 02:03:48.513194 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-06 02:03:48.513198 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-06 02:03:48.513203 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 02:03:48.513207 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-06 02:03:48.513212 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-06 02:03:48.513216 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-04-06 02:03:48.513221 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-06 02:03:48.513225 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:03:48.513230 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-06 02:03:48.513234 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-06 02:03:48.513239 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:03:48.513243 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-06 02:03:48.513248 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-06 02:03:48.513263 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-06 02:03:48.513268 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-06 02:03:48.513273 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-06 02:03:48.513277 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-06 02:03:48.513282 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-06 02:03:48.513286 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:03:48.513291 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-06 02:03:48.513302 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-06 02:03:48.513318 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:03:48.513323 | orchestrator |
2026-04-06 02:03:48.513328 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-04-06 02:03:48.513332 | orchestrator |
2026-04-06 02:03:48.513337 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-04-06 02:03:48.513342 | orchestrator | Monday 06 April 2026 02:03:40 +0000 (0:00:00.587) 0:00:04.522 **********
2026-04-06 02:03:48.513346 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:03:48.513351 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:03:48.513355 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:03:48.513361 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:03:48.513366 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:03:48.513372 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:03:48.513377 | orchestrator | ok: [testbed-manager]
2026-04-06 02:03:48.513382 | orchestrator |
2026-04-06 02:03:48.513387 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-04-06 02:03:48.513393 | orchestrator | Monday 06 April 2026 02:03:42 +0000 (0:00:01.256) 0:00:05.779 **********
2026-04-06 02:03:48.513410 | orchestrator | ok: [testbed-manager]
2026-04-06 02:03:48.513422 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:03:48.513428 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:03:48.513433 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:03:48.513439 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:03:48.513444 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:03:48.513450 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:03:48.513455 | orchestrator |
2026-04-06 02:03:48.513461 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-04-06 02:03:48.513466 | orchestrator | Monday 06 April 2026 02:03:43 +0000 (0:00:00.320) 0:00:07.042 **********
2026-04-06 02:03:48.513481 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:03:48.513489 | orchestrator |
2026-04-06 02:03:48.513501 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-04-06 02:03:48.513544 | orchestrator | Monday 06 April 2026 02:03:43 +0000 (0:00:00.320) 0:00:07.363 **********
2026-04-06 02:03:48.513555 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:03:48.513563 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:03:48.513570 | orchestrator | changed: [testbed-manager]
2026-04-06 02:03:48.513578 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:03:48.513585 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:03:48.513592 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:03:48.513598 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:03:48.513610 | orchestrator |
2026-04-06 02:03:48.513622 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-04-06 02:03:48.513629 | orchestrator | Monday 06 April 2026 02:03:45 +0000 (0:00:02.279) 0:00:09.643 **********
2026-04-06 02:03:48.513635 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:03:48.513645 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:03:48.513654 | orchestrator |
2026-04-06 02:03:48.513662 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-04-06 02:03:48.513669 | orchestrator | Monday 06 April 2026 02:03:46 +0000 (0:00:00.306) 0:00:09.950 **********
2026-04-06 02:03:48.513676 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:03:48.513683 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:03:48.513689 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:03:48.513697 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:03:48.513704 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:03:48.513711 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:03:48.513726 | orchestrator |
2026-04-06 02:03:48.513739 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-04-06 02:03:48.513746 | orchestrator | Monday 06 April 2026 02:03:47 +0000 (0:00:01.011) 0:00:10.961 **********
2026-04-06 02:03:48.513753 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:03:48.513760 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:03:48.513768 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:03:48.513772 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:03:48.513777 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:03:48.513781 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:03:48.513786 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:03:48.513790 | orchestrator |
2026-04-06 02:03:48.513795 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-04-06 02:03:48.513799 | orchestrator | Monday 06 April 2026 02:03:47 +0000 (0:00:00.576) 0:00:11.538 **********
2026-04-06 02:03:48.513804 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:03:48.513808 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:03:48.513813 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:03:48.513817 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:03:48.513822 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:03:48.513826 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:03:48.513831 | orchestrator | ok: [testbed-manager]
2026-04-06 02:03:48.513835 | orchestrator |
2026-04-06 02:03:48.513840 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-04-06 02:03:48.513845 | orchestrator | Monday 06 April 2026 02:03:48 +0000 (0:00:00.515) 0:00:12.053 **********
2026-04-06 02:03:48.513850 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:03:48.513854 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:03:48.513865 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:04:01.832944 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:04:01.833049 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:04:01.833065 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:04:01.833076 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:04:01.833087 | orchestrator |
2026-04-06 02:04:01.833099 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-04-06 02:04:01.833111 | orchestrator | Monday 06 April 2026 02:03:48 +0000 (0:00:00.231) 0:00:12.284 **********
2026-04-06 02:04:01.833124 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:04:01.833147 | orchestrator |
2026-04-06 02:04:01.833158 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-04-06 02:04:01.833169 | orchestrator | Monday 06 April 2026 02:03:48 +0000 (0:00:00.367) 0:00:12.633 **********
2026-04-06 02:04:01.833179 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:04:01.833190 | orchestrator |
2026-04-06 02:04:01.833200 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-04-06 02:04:01.833212 | orchestrator | Monday 06 April 2026 02:03:49 +0000 (0:00:00.367) 0:00:13.000 **********
2026-04-06 02:04:01.833219 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:04:01.833226 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:04:01.833233 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:04:01.833239 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:04:01.833246 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:04:01.833253 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:04:01.833259 | orchestrator | ok: [testbed-manager]
2026-04-06 02:04:01.833265 | orchestrator |
2026-04-06 02:04:01.833272 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-04-06 02:04:01.833278 | orchestrator | Monday 06 April 2026 02:03:50 +0000 (0:00:01.675) 0:00:14.676 **********
2026-04-06 02:04:01.833302 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:04:01.833309 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:04:01.833315 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:04:01.833321 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:04:01.833327 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:04:01.833333 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:04:01.833339 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:04:01.833346 | orchestrator |
2026-04-06 02:04:01.833352 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-04-06 02:04:01.833358 | orchestrator | Monday 06 April 2026 02:03:51 +0000 (0:00:00.405) 0:00:15.082 **********
2026-04-06 02:04:01.833365 | orchestrator | ok: [testbed-manager]
2026-04-06 02:04:01.833371 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:04:01.833377 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:04:01.833383 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:04:01.833389 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:04:01.833395 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:04:01.833401 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:04:01.833408 | orchestrator |
2026-04-06 02:04:01.833414 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-04-06 02:04:01.833420 | orchestrator | Monday 06 April 2026 02:03:51 +0000 (0:00:00.561) 0:00:15.643 **********
2026-04-06 02:04:01.833426 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:04:01.833432 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:04:01.833439 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:04:01.833445 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:04:01.833451 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:04:01.833457 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:04:01.833464 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:04:01.833471 | orchestrator |
2026-04-06 02:04:01.833480 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-04-06 02:04:01.833489 | orchestrator | Monday 06 April 2026 02:03:52 +0000 (0:00:00.294) 0:00:15.938 **********
2026-04-06 02:04:01.833497 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:04:01.833505 | orchestrator | ok: [testbed-manager]
2026-04-06 02:04:01.833513 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:04:01.833520 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:04:01.833553 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:04:01.833564 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:04:01.833579 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:04:01.833588 | orchestrator |
2026-04-06 02:04:01.833595 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-04-06 02:04:01.833603 | orchestrator | Monday 06 April 2026 02:03:52 +0000 (0:00:00.554) 0:00:16.492 **********
2026-04-06 02:04:01.833610 | orchestrator | ok: [testbed-manager]
2026-04-06 02:04:01.833618 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:04:01.833626 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:04:01.833633 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:04:01.833641 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:04:01.833648 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:04:01.833655 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:04:01.833662 | orchestrator |
2026-04-06 02:04:01.833670 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-04-06 02:04:01.833677 | orchestrator | Monday 06 April 2026 02:03:54 +0000 (0:00:01.204) 0:00:17.697 **********
2026-04-06 02:04:01.833684 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:04:01.833692 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:04:01.833699 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:04:01.833706 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:04:01.833713 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:04:01.833720 | orchestrator | ok: [testbed-manager]
2026-04-06 02:04:01.833728 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:04:01.833736 | orchestrator |
2026-04-06 02:04:01.833743 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-04-06 02:04:01.833757 | orchestrator | Monday 06 April 2026 02:03:55 +0000 (0:00:01.120) 0:00:18.817 **********
2026-04-06 02:04:01.833779 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:04:01.833786 | orchestrator |
2026-04-06 02:04:01.833792 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-04-06 02:04:01.833799 | orchestrator | Monday 06 April 2026 02:03:55 +0000 (0:00:00.411) 0:00:19.228 **********
2026-04-06 02:04:01.833805 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:04:01.833820 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:04:01.833826 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:04:01.833832 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:04:01.833838 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:04:01.833845 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:04:01.833851 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:04:01.833857 | orchestrator |
2026-04-06 02:04:01.833863 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-06 02:04:01.833869 | orchestrator | Monday 06 April 2026 02:03:56 +0000 (0:00:01.358) 0:00:20.587 **********
2026-04-06 02:04:01.833876 | orchestrator | ok: [testbed-manager]
2026-04-06 02:04:01.833883 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:04:01.833894 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:04:01.833905 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:04:01.833915 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:04:01.833926 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:04:01.833936 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:04:01.833946 | orchestrator |
2026-04-06 02:04:01.833957 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-06 02:04:01.833968 | orchestrator | Monday 06 April 2026 02:03:57 +0000 (0:00:00.263) 0:00:20.850 **********
2026-04-06 02:04:01.833979 | orchestrator | ok: [testbed-manager]
2026-04-06 02:04:01.833991 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:04:01.834001 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:04:01.834065 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:04:01.834075 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:04:01.834081 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:04:01.834088 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:04:01.834094 | orchestrator |
2026-04-06 02:04:01.834100 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-06 02:04:01.834106 | orchestrator | Monday 06 April 2026 02:03:57 +0000 (0:00:00.279) 0:00:21.130 **********
2026-04-06 02:04:01.834112 | orchestrator | ok: [testbed-manager]
2026-04-06 02:04:01.834119 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:04:01.834125 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:04:01.834131 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:04:01.834137 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:04:01.834143 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:04:01.834149 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:04:01.834155 | orchestrator |
2026-04-06 02:04:01.834161 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-06 02:04:01.834168 | orchestrator | Monday 06 April 2026 02:03:57 +0000 (0:00:00.293) 0:00:21.424 **********
2026-04-06 02:04:01.834175 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:04:01.834183 | orchestrator |
2026-04-06 02:04:01.834189 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-06 02:04:01.834196 | orchestrator | Monday 06 April 2026 02:03:58 +0000 (0:00:00.358) 0:00:21.782 **********
2026-04-06 02:04:01.834202 | orchestrator | ok: [testbed-manager]
2026-04-06 02:04:01.834208 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:04:01.834222 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:04:01.834229 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:04:01.834240 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:04:01.834251 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:04:01.834261 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:04:01.834272 | orchestrator |
2026-04-06 02:04:01.834283 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-06 02:04:01.834293 | orchestrator | Monday 06 April 2026 02:03:58 +0000 (0:00:00.576) 0:00:22.358 **********
2026-04-06 02:04:01.834303 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:04:01.834315 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:04:01.834326 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:04:01.834337 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:04:01.834348 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:04:01.834358 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:04:01.834368 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:04:01.834374 | orchestrator |
2026-04-06 02:04:01.834381 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-06 02:04:01.834387 | orchestrator | Monday 06 April 2026 02:03:58 +0000 (0:00:00.261) 0:00:22.620 **********
2026-04-06 02:04:01.834393 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:04:01.834400 | orchestrator | ok: [testbed-manager]
2026-04-06 02:04:01.834406 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:04:01.834412 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:04:01.834419 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:04:01.834425 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:04:01.834431 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:04:01.834437 | orchestrator |
2026-04-06 02:04:01.834444 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-06 02:04:01.834450 | orchestrator | Monday 06 April 2026 02:04:00 +0000 (0:00:01.110) 0:00:23.730 **********
2026-04-06 02:04:01.834456 | orchestrator | ok: [testbed-manager]
2026-04-06 02:04:01.834463 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:04:01.834469 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:04:01.834475 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:04:01.834481 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:04:01.834487 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:04:01.834493 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:04:01.834499 | orchestrator |
2026-04-06 02:04:01.834506 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-06 02:04:01.834512 | orchestrator | Monday 06 April 2026 02:04:00 +0000 (0:00:00.555) 0:00:24.286 **********
2026-04-06 02:04:01.834518 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:04:01.834574 | orchestrator | ok: [testbed-manager]
2026-04-06 02:04:01.834586 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:04:01.834607 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:04:01.834628 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:04:45.539255 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:04:45.539353 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:04:45.539374 | orchestrator |
2026-04-06 02:04:45.539385 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-06 02:04:45.539396 | orchestrator | Monday 06 April 2026 02:04:01 +0000 (0:00:01.225) 0:00:25.511 **********
2026-04-06 02:04:45.539406 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:04:45.539417 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:04:45.539427 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:04:45.539436 | orchestrator | changed: [testbed-manager]
2026-04-06 02:04:45.539446 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:04:45.539456 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:04:45.539465 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:04:45.539474 | orchestrator |
2026-04-06 02:04:45.539484 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-04-06 02:04:45.539494 | orchestrator | Monday 06 April 2026 02:04:18 +0000 (0:00:16.366) 0:00:41.877 **********
2026-04-06 02:04:45.539503 | orchestrator | ok: [testbed-manager]
2026-04-06 02:04:45.539536 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:04:45.539543 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:04:45.539551 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:04:45.539560 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:04:45.539569 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:04:45.539578 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:04:45.539635 | orchestrator |
2026-04-06 02:04:45.539645 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-04-06 02:04:45.539655 | orchestrator | Monday 06 April 2026 02:04:18 +0000 (0:00:00.268) 0:00:42.146 **********
2026-04-06 02:04:45.539662 | orchestrator | ok: [testbed-manager]
2026-04-06 02:04:45.539668 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:04:45.539674 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:04:45.539680 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:04:45.539686 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:04:45.539692 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:04:45.539699 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:04:45.539706 | orchestrator |
2026-04-06 02:04:45.539713 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-04-06 02:04:45.539720 | orchestrator | Monday 06 April 2026 02:04:18 +0000 (0:00:00.271) 0:00:42.417 **********
2026-04-06 02:04:45.539727 | orchestrator | ok: [testbed-manager]
2026-04-06 02:04:45.539733 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:04:45.539739 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:04:45.539746 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:04:45.539752 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:04:45.539759 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:04:45.539766 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:04:45.539773 | orchestrator |
2026-04-06 02:04:45.539780 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-04-06 02:04:45.539786 | orchestrator | Monday 06 April 2026 02:04:19 +0000 (0:00:00.298) 0:00:42.715 **********
2026-04-06 02:04:45.539796 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:04:45.539804 | orchestrator |
2026-04-06 02:04:45.539811 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-04-06 02:04:45.539818 | orchestrator | Monday 06 April 2026 02:04:19 +0000 (0:00:00.330) 0:00:43.046 **********
2026-04-06 02:04:45.539824 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:04:45.539830 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:04:45.539837 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:04:45.539843 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:04:45.539849 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:04:45.539856 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:04:45.539863 | orchestrator | ok: [testbed-manager]
2026-04-06 02:04:45.539870 | orchestrator |
2026-04-06 02:04:45.539876 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-04-06 02:04:45.539883 | orchestrator | Monday 06 April 2026 02:04:21 +0000 (0:00:01.908) 0:00:44.954 **********
2026-04-06 02:04:45.539890 | orchestrator | changed: [testbed-manager]
2026-04-06 02:04:45.539897 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:04:45.539904 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:04:45.539911 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:04:45.539917 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:04:45.539923 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:04:45.539930 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:04:45.539936 | orchestrator |
2026-04-06 02:04:45.539943 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-04-06 02:04:45.539964 | orchestrator | Monday 06 April 2026 02:04:22 +0000 (0:00:01.075) 0:00:46.030 **********
2026-04-06 02:04:45.539971 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:04:45.539979 | orchestrator | ok: [testbed-manager]
2026-04-06 02:04:45.539986 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:04:45.540001 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:04:45.540008 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:04:45.540015 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:04:45.540022 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:04:45.540029 | orchestrator |
2026-04-06 02:04:45.540037 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-04-06 02:04:45.540044 | orchestrator | Monday 06 April 2026 02:04:23 +0000 (0:00:00.342) 0:00:46.848 **********
2026-04-06 02:04:45.540054 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:04:45.540064 | orchestrator |
2026-04-06 02:04:45.540072 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-04-06 02:04:45.540081 | orchestrator | Monday 06 April 2026 02:04:23 +0000 (0:00:00.342) 0:00:47.191 **********
2026-04-06 02:04:45.540089 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:04:45.540096 | orchestrator | changed: [testbed-manager]
2026-04-06 02:04:45.540103 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:04:45.540111 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:04:45.540118 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:04:45.540126 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:04:45.540133 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:04:45.540141 | orchestrator |
2026-04-06 02:04:45.540169 | orchestrator | TASK
[osism.services.rsyslog : Include additional log server tasks] ************ 2026-04-06 02:04:45.540176 | orchestrator | Monday 06 April 2026 02:04:24 +0000 (0:00:00.994) 0:00:48.185 ********** 2026-04-06 02:04:45.540184 | orchestrator | skipping: [testbed-manager] 2026-04-06 02:04:45.540191 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:04:45.540198 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:04:45.540204 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:04:45.540212 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:04:45.540219 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:04:45.540226 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:04:45.540233 | orchestrator | 2026-04-06 02:04:45.540240 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-04-06 02:04:45.540247 | orchestrator | Monday 06 April 2026 02:04:24 +0000 (0:00:00.279) 0:00:48.465 ********** 2026-04-06 02:04:45.540255 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:04:45.540262 | orchestrator | 2026-04-06 02:04:45.540270 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-04-06 02:04:45.540277 | orchestrator | Monday 06 April 2026 02:04:25 +0000 (0:00:00.343) 0:00:48.808 ********** 2026-04-06 02:04:45.540284 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:04:45.540291 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:04:45.540298 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:04:45.540306 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:04:45.540313 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:04:45.540320 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:04:45.540327 | orchestrator | ok: [testbed-manager] 2026-04-06 
02:04:45.540335 | orchestrator | 2026-04-06 02:04:45.540342 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-04-06 02:04:45.540350 | orchestrator | Monday 06 April 2026 02:04:26 +0000 (0:00:01.842) 0:00:50.651 ********** 2026-04-06 02:04:45.540358 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:04:45.540365 | orchestrator | changed: [testbed-manager] 2026-04-06 02:04:45.540373 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:04:45.540382 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:04:45.540389 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:04:45.540397 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:04:45.540405 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:04:45.540421 | orchestrator | 2026-04-06 02:04:45.540429 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-04-06 02:04:45.540437 | orchestrator | Monday 06 April 2026 02:04:28 +0000 (0:00:01.149) 0:00:51.800 ********** 2026-04-06 02:04:45.540445 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:04:45.540453 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:04:45.540461 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:04:45.540469 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:04:45.540476 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:04:45.540484 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:04:45.540492 | orchestrator | changed: [testbed-manager] 2026-04-06 02:04:45.540499 | orchestrator | 2026-04-06 02:04:45.540508 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-04-06 02:04:45.540515 | orchestrator | Monday 06 April 2026 02:04:42 +0000 (0:00:14.701) 0:01:06.502 ********** 2026-04-06 02:04:45.540522 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:04:45.540528 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:04:45.540534 | 
orchestrator | ok: [testbed-node-5] 2026-04-06 02:04:45.540541 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:04:45.540547 | orchestrator | ok: [testbed-manager] 2026-04-06 02:04:45.540555 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:04:45.540563 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:04:45.540572 | orchestrator | 2026-04-06 02:04:45.540580 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-04-06 02:04:45.540609 | orchestrator | Monday 06 April 2026 02:04:43 +0000 (0:00:00.892) 0:01:07.395 ********** 2026-04-06 02:04:45.540617 | orchestrator | ok: [testbed-manager] 2026-04-06 02:04:45.540625 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:04:45.540633 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:04:45.540641 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:04:45.540649 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:04:45.540657 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:04:45.540665 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:04:45.540673 | orchestrator | 2026-04-06 02:04:45.540681 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-04-06 02:04:45.540690 | orchestrator | Monday 06 April 2026 02:04:44 +0000 (0:00:00.955) 0:01:08.350 ********** 2026-04-06 02:04:45.540705 | orchestrator | ok: [testbed-manager] 2026-04-06 02:04:45.540713 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:04:45.540722 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:04:45.540730 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:04:45.540738 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:04:45.540747 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:04:45.540753 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:04:45.540760 | orchestrator | 2026-04-06 02:04:45.540766 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-04-06 02:04:45.540774 | 
orchestrator | Monday 06 April 2026 02:04:44 +0000 (0:00:00.265) 0:01:08.616 ********** 2026-04-06 02:04:45.540781 | orchestrator | ok: [testbed-manager] 2026-04-06 02:04:45.540787 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:04:45.540794 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:04:45.540800 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:04:45.540807 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:04:45.540814 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:04:45.540820 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:04:45.540826 | orchestrator | 2026-04-06 02:04:45.540833 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-04-06 02:04:45.540839 | orchestrator | Monday 06 April 2026 02:04:45 +0000 (0:00:00.262) 0:01:08.878 ********** 2026-04-06 02:04:45.540847 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:04:45.540855 | orchestrator | 2026-04-06 02:04:45.540874 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-04-06 02:07:10.868230 | orchestrator | Monday 06 April 2026 02:04:45 +0000 (0:00:00.339) 0:01:09.218 ********** 2026-04-06 02:07:10.868335 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:07:10.868345 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:07:10.868351 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:07:10.868357 | orchestrator | ok: [testbed-manager] 2026-04-06 02:07:10.868363 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:07:10.868369 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:07:10.868377 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:07:10.868387 | orchestrator | 2026-04-06 02:07:10.868394 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] 
*************************** 2026-04-06 02:07:10.868400 | orchestrator | Monday 06 April 2026 02:04:47 +0000 (0:00:01.573) 0:01:10.792 ********** 2026-04-06 02:07:10.868405 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:07:10.868412 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:07:10.868418 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:07:10.868423 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:07:10.868429 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:07:10.868435 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:07:10.868440 | orchestrator | changed: [testbed-manager] 2026-04-06 02:07:10.868446 | orchestrator | 2026-04-06 02:07:10.868452 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-04-06 02:07:10.868458 | orchestrator | Monday 06 April 2026 02:04:47 +0000 (0:00:00.562) 0:01:11.355 ********** 2026-04-06 02:07:10.868464 | orchestrator | ok: [testbed-manager] 2026-04-06 02:07:10.868469 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:07:10.868475 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:07:10.868480 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:07:10.868486 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:07:10.868491 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:07:10.868497 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:07:10.868502 | orchestrator | 2026-04-06 02:07:10.868508 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-04-06 02:07:10.868514 | orchestrator | Monday 06 April 2026 02:04:47 +0000 (0:00:00.272) 0:01:11.627 ********** 2026-04-06 02:07:10.868519 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:07:10.868525 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:07:10.868530 | orchestrator | ok: [testbed-manager] 2026-04-06 02:07:10.868536 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:07:10.868541 | orchestrator | ok: [testbed-node-0] 
2026-04-06 02:07:10.868592 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:07:10.868599 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:07:10.868604 | orchestrator | 2026-04-06 02:07:10.868610 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-04-06 02:07:10.868616 | orchestrator | Monday 06 April 2026 02:04:49 +0000 (0:00:01.150) 0:01:12.777 ********** 2026-04-06 02:07:10.868621 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:07:10.868626 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:07:10.868632 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:07:10.868637 | orchestrator | changed: [testbed-manager] 2026-04-06 02:07:10.868643 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:07:10.868648 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:07:10.868654 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:07:10.868659 | orchestrator | 2026-04-06 02:07:10.868667 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-04-06 02:07:10.868673 | orchestrator | Monday 06 April 2026 02:04:50 +0000 (0:00:01.653) 0:01:14.431 ********** 2026-04-06 02:07:10.868678 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:07:10.868684 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:07:10.868689 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:07:10.868695 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:07:10.868700 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:07:10.868706 | orchestrator | ok: [testbed-manager] 2026-04-06 02:07:10.868711 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:07:10.868716 | orchestrator | 2026-04-06 02:07:10.868722 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-04-06 02:07:10.868745 | orchestrator | Monday 06 April 2026 02:04:53 +0000 (0:00:02.379) 0:01:16.811 ********** 2026-04-06 02:07:10.868751 | orchestrator | ok: 
[testbed-manager] 2026-04-06 02:07:10.868757 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:07:10.868762 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:07:10.868767 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:07:10.868773 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:07:10.868778 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:07:10.868784 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:07:10.868791 | orchestrator | 2026-04-06 02:07:10.868797 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-04-06 02:07:10.868804 | orchestrator | Monday 06 April 2026 02:05:34 +0000 (0:00:41.281) 0:01:58.092 ********** 2026-04-06 02:07:10.868810 | orchestrator | changed: [testbed-manager] 2026-04-06 02:07:10.868816 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:07:10.868823 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:07:10.868830 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:07:10.868837 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:07:10.868844 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:07:10.868850 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:07:10.868856 | orchestrator | 2026-04-06 02:07:10.868863 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-04-06 02:07:10.868870 | orchestrator | Monday 06 April 2026 02:06:53 +0000 (0:01:18.840) 0:03:16.932 ********** 2026-04-06 02:07:10.868877 | orchestrator | ok: [testbed-manager] 2026-04-06 02:07:10.868883 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:07:10.868889 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:07:10.868896 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:07:10.868902 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:07:10.868909 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:07:10.868915 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:07:10.868922 | orchestrator | 2026-04-06 02:07:10.868928 | 
orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-04-06 02:07:10.868934 | orchestrator | Monday 06 April 2026 02:06:55 +0000 (0:00:02.106) 0:03:19.039 ********** 2026-04-06 02:07:10.868940 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:07:10.868946 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:07:10.868953 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:07:10.868959 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:07:10.868965 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:07:10.868971 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:07:10.868978 | orchestrator | changed: [testbed-manager] 2026-04-06 02:07:10.868985 | orchestrator | 2026-04-06 02:07:10.868991 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-04-06 02:07:10.868997 | orchestrator | Monday 06 April 2026 02:07:09 +0000 (0:00:14.133) 0:03:33.172 ********** 2026-04-06 02:07:10.869029 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-04-06 02:07:10.869050 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 
'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-04-06 02:07:10.869064 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-04-06 02:07:10.869072 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-06 02:07:10.869078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-06 02:07:10.869085 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-04-06 02:07:10.869091 | orchestrator | 2026-04-06 02:07:10.869098 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-04-06 02:07:10.869105 | orchestrator | Monday 06 April 2026 02:07:09 +0000 (0:00:00.462) 0:03:33.635 ********** 2026-04-06 02:07:10.869111 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 
262144})  2026-04-06 02:07:10.869118 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-06 02:07:10.869124 | orchestrator | skipping: [testbed-manager] 2026-04-06 02:07:10.869130 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:07:10.869137 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-06 02:07:10.869143 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:07:10.869151 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-06 02:07:10.869157 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:07:10.869163 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-06 02:07:10.869168 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-06 02:07:10.869173 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-06 02:07:10.869179 | orchestrator | 2026-04-06 02:07:10.869184 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-04-06 02:07:10.869189 | orchestrator | Monday 06 April 2026 02:07:10 +0000 (0:00:00.782) 0:03:34.417 ********** 2026-04-06 02:07:10.869195 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-06 02:07:10.869202 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-06 02:07:10.869207 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-06 02:07:10.869213 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-06 02:07:10.869218 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-06 02:07:10.869227 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-06 02:07:18.473427 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-06 02:07:18.473530 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-06 02:07:18.473615 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-06 02:07:18.473628 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-06 02:07:18.473637 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-06 02:07:18.473646 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-06 02:07:18.473654 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-06 02:07:18.473663 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-06 02:07:18.473672 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-06 02:07:18.473681 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-06 02:07:18.473690 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-06 02:07:18.473699 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-06 02:07:18.473707 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-06 02:07:18.473716 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-06 02:07:18.473725 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-06 02:07:18.473737 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-06 02:07:18.473752 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-06 02:07:18.473768 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-06 02:07:18.473782 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-06 02:07:18.473797 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-06 02:07:18.473812 | orchestrator | skipping: [testbed-manager] 2026-04-06 02:07:18.473828 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-06 02:07:18.473840 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-06 02:07:18.473849 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-06 02:07:18.473857 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-06 02:07:18.473866 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-06 02:07:18.473874 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-06 02:07:18.473883 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-06 02:07:18.473892 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:07:18.473900 | orchestrator | skipping: [testbed-node-5] => 
(item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-06 02:07:18.473923 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-06 02:07:18.473932 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-06 02:07:18.473944 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-06 02:07:18.473954 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-06 02:07:18.473964 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-06 02:07:18.473983 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-06 02:07:18.473994 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:07:18.474005 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:07:18.474072 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-06 02:07:18.474085 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-06 02:07:18.474096 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-06 02:07:18.474106 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-06 02:07:18.474116 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-06 02:07:18.474144 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-06 02:07:18.474156 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-06 02:07:18.474166 | orchestrator | changed: [testbed-node-2] => 
(item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-06 02:07:18.474176 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-06 02:07:18.474186 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-06 02:07:18.474197 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-06 02:07:18.474207 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-06 02:07:18.474217 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-06 02:07:18.474227 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-06 02:07:18.474237 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-06 02:07:18.474247 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-06 02:07:18.474258 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-06 02:07:18.474268 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-06 02:07:18.474279 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-06 02:07:18.474289 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-06 02:07:18.474300 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-06 02:07:18.474308 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-06 02:07:18.474317 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-06 02:07:18.474326 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-06 02:07:18.474334 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-06 02:07:18.474343 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-06 02:07:18.474352 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-06 02:07:18.474361 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-06 02:07:18.474369 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-06 02:07:18.474378 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-06 02:07:18.474394 | orchestrator |
2026-04-06 02:07:18.474404 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-04-06 02:07:18.474413 | orchestrator | Monday 06 April 2026 02:07:15 +0000 (0:00:04.770) 0:03:39.188 **********
2026-04-06 02:07:18.474422 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-06 02:07:18.474431 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-06 02:07:18.474440 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-06 02:07:18.474448 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-06 02:07:18.474462 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-06 02:07:18.474471 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-06 02:07:18.474480 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-06 02:07:18.474488 | orchestrator |
2026-04-06 02:07:18.474497 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-04-06 02:07:18.474506 | orchestrator | Monday 06 April 2026 02:07:16 +0000 (0:00:01.475) 0:03:40.663 **********
2026-04-06 02:07:18.474515 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-06 02:07:18.474524 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:07:18.474532 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-06 02:07:18.474561 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-06 02:07:18.474571 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:07:18.474580 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:07:18.474588 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-06 02:07:18.474597 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:07:18.474606 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-06 02:07:18.474615 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-06 02:07:18.474629 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-06 02:07:33.103899 | orchestrator |
2026-04-06 02:07:33.104036 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-04-06 02:07:33.104055 | orchestrator | Monday 06 April 2026 02:07:18 +0000 (0:00:01.488) 0:03:42.151 **********
2026-04-06 02:07:33.104102 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-06 02:07:33.104116 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:07:33.104134 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-06 02:07:33.104153 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-06 02:07:33.104170 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:07:33.104187 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-06 02:07:33.104207 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:07:33.104227 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:07:33.104246 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-06 02:07:33.104259 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-06 02:07:33.104285 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-06 02:07:33.104297 | orchestrator |
2026-04-06 02:07:33.104308 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-04-06 02:07:33.104345 | orchestrator | Monday 06 April 2026 02:07:20 +0000 (0:00:01.625) 0:03:43.777 **********
2026-04-06 02:07:33.104357 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-06 02:07:33.104368 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:07:33.104379 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-06 02:07:33.104390 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:07:33.104403 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-06 02:07:33.104416 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:07:33.104428 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-06 02:07:33.104441 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:07:33.104453 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-06 02:07:33.104466 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-06 02:07:33.104479 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-06 02:07:33.104492 | orchestrator |
2026-04-06 02:07:33.104506 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-04-06 02:07:33.104518 | orchestrator | Monday 06 April 2026 02:07:20 +0000 (0:00:00.658) 0:03:44.436 **********
2026-04-06 02:07:33.104614 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:07:33.104627 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:07:33.104640 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:07:33.104653 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:07:33.104666 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:07:33.104678 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:07:33.104690 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:07:33.104704 | orchestrator |
2026-04-06 02:07:33.104716 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-04-06 02:07:33.104729 | orchestrator | Monday 06 April 2026 02:07:21 +0000 (0:00:00.320) 0:03:44.756 **********
2026-04-06 02:07:33.104740 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:07:33.104752 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:07:33.104762 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:07:33.104773 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:07:33.104782 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:07:33.104792 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:07:33.104801 | orchestrator | ok: [testbed-manager]
2026-04-06 02:07:33.104811 | orchestrator |
2026-04-06 02:07:33.104820 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-04-06 02:07:33.104830 | orchestrator | Monday 06 April 2026 02:07:27 +0000 (0:00:05.974) 0:03:50.730 **********
2026-04-06 02:07:33.104840 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-04-06 02:07:33.104849 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-04-06 02:07:33.104859 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:07:33.104868 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:07:33.104878 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-04-06 02:07:33.104887 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-04-06 02:07:33.104897 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:07:33.104906 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:07:33.104917 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-04-06 02:07:33.104929 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-04-06 02:07:33.104967 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:07:33.104984 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:07:33.105000 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-04-06 02:07:33.105015 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:07:33.105029 | orchestrator |
2026-04-06 02:07:33.105057 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-04-06 02:07:33.105076 | orchestrator | Monday 06 April 2026 02:07:27 +0000 (0:00:00.353) 0:03:51.084 **********
2026-04-06 02:07:33.105093 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-04-06 02:07:33.105109 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-04-06 02:07:33.105126 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-04-06 02:07:33.105165 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-04-06 02:07:33.105176 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-04-06 02:07:33.105185 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-04-06 02:07:33.105195 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-04-06 02:07:33.105204 | orchestrator |
2026-04-06 02:07:33.105214 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-04-06 02:07:33.105223 | orchestrator | Monday 06 April 2026 02:07:28 +0000 (0:00:01.180) 0:03:52.265 **********
2026-04-06 02:07:33.105235 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:07:33.105248 | orchestrator |
2026-04-06 02:07:33.105262 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-04-06 02:07:33.105278 | orchestrator | Monday 06 April 2026 02:07:29 +0000 (0:00:00.473) 0:03:52.738 **********
2026-04-06 02:07:33.105291 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:07:33.105307 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:07:33.105323 | orchestrator | ok: [testbed-manager]
2026-04-06 02:07:33.105338 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:07:33.105354 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:07:33.105370 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:07:33.105385 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:07:33.105420 | orchestrator |
2026-04-06 02:07:33.105450 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-04-06 02:07:33.105468 | orchestrator | Monday 06 April 2026 02:07:30 +0000 (0:00:01.186) 0:03:53.925 **********
2026-04-06 02:07:33.105485 | orchestrator | ok: [testbed-manager]
2026-04-06 02:07:33.105502 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:07:33.105520 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:07:33.105559 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:07:33.105576 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:07:33.105592 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:07:33.105608 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:07:33.105624 | orchestrator |
2026-04-06 02:07:33.105640 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-04-06 02:07:33.105655 | orchestrator | Monday 06 April 2026 02:07:30 +0000 (0:00:00.697) 0:03:54.623 **********
2026-04-06 02:07:33.105671 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:07:33.105687 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:07:33.105703 | orchestrator | changed: [testbed-manager]
2026-04-06 02:07:33.105719 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:07:33.105735 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:07:33.105752 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:07:33.105767 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:07:33.105782 | orchestrator |
2026-04-06 02:07:33.105797 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-04-06 02:07:33.105813 | orchestrator | Monday 06 April 2026 02:07:31 +0000 (0:00:00.625) 0:03:55.249 **********
2026-04-06 02:07:33.105830 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:07:33.105846 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:07:33.105863 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:07:33.105874 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:07:33.105883 | orchestrator | ok: [testbed-manager]
2026-04-06 02:07:33.105893 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:07:33.105902 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:07:33.105912 | orchestrator |
2026-04-06 02:07:33.105922 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-04-06 02:07:33.105945 | orchestrator | Monday 06 April 2026 02:07:32 +0000 (0:00:00.564) 0:03:55.813 **********
2026-04-06 02:07:33.105971 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775439726.090866, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 02:07:33.105985 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775439697.1152558, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 02:07:33.105996 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775439715.904334, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 02:07:33.106102 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775439720.8087504, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 02:07:38.131603 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775439730.6219013, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 02:07:38.131719 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775439729.033187, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 02:07:38.131736 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775439722.4263687, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 02:07:38.131775 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 02:07:38.131802 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 02:07:38.131812 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 02:07:38.131823 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 02:07:38.131861 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 02:07:38.131872 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 02:07:38.131882 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 02:07:38.131901 | orchestrator |
2026-04-06 02:07:38.131913 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-04-06 02:07:38.131925 | orchestrator | Monday 06 April 2026 02:07:33 +0000 (0:00:00.965) 0:03:56.779 **********
2026-04-06 02:07:38.131935 | orchestrator | changed: [testbed-manager]
2026-04-06 02:07:38.131946 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:07:38.131955 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:07:38.131965 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:07:38.131975 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:07:38.131985 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:07:38.131994 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:07:38.132004 | orchestrator |
2026-04-06 02:07:38.132014 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-04-06 02:07:38.132024 | orchestrator | Monday 06 April 2026 02:07:34 +0000 (0:00:01.070) 0:03:57.849 **********
2026-04-06 02:07:38.132034 | orchestrator | changed: [testbed-manager]
2026-04-06 02:07:38.132046 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:07:38.132058 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:07:38.132071 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:07:38.132082 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:07:38.132094 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:07:38.132106 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:07:38.132117 | orchestrator |
2026-04-06 02:07:38.132135 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-04-06 02:07:38.132147 | orchestrator | Monday 06 April 2026 02:07:35 +0000 (0:00:01.166) 0:03:59.016 **********
2026-04-06 02:07:38.132158 | orchestrator | changed: [testbed-manager]
2026-04-06 02:07:38.132170 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:07:38.132181 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:07:38.132193 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:07:38.132204 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:07:38.132215 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:07:38.132227 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:07:38.132239 | orchestrator |
2026-04-06 02:07:38.132250 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-04-06 02:07:38.132261 | orchestrator | Monday 06 April 2026 02:07:36 +0000 (0:00:01.122) 0:04:00.139 **********
2026-04-06 02:07:38.132272 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:07:38.132284 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:07:38.132296 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:07:38.132307 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:07:38.132319 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:07:38.132330 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:07:38.132342 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:07:38.132354 | orchestrator |
2026-04-06 02:07:38.132368 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-04-06 02:07:38.132386 | orchestrator | Monday 06 April 2026 02:07:36 +0000 (0:00:00.316) 0:04:00.455 **********
2026-04-06 02:07:38.132402 | orchestrator | ok: [testbed-manager]
2026-04-06 02:07:38.132417 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:07:38.132430 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:07:38.132444 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:07:38.132461 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:07:38.132477 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:07:38.132493 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:07:38.132505 | orchestrator |
2026-04-06 02:07:38.132577 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-04-06 02:07:38.132592 | orchestrator | Monday 06 April 2026 02:07:37 +0000 (0:00:00.865) 0:04:01.321 **********
2026-04-06 02:07:38.132605 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:07:38.132626 | orchestrator |
2026-04-06 02:07:38.132637 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-04-06 02:07:38.132656 | orchestrator | Monday 06 April 2026 02:07:38 +0000 (0:00:00.491) 0:04:01.813 **********
2026-04-06 02:08:57.516009 | orchestrator | ok: [testbed-manager]
2026-04-06 02:08:57.516145 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:08:57.516164 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:08:57.516176 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:08:57.516188 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:08:57.516199 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:08:57.516210 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:08:57.516222 | orchestrator |
2026-04-06 02:08:57.516234 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-04-06 02:08:57.516247 | orchestrator | Monday 06 April 2026 02:07:45 +0000 (0:00:07.850) 0:04:09.664 **********
2026-04-06 02:08:57.516270 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:08:57.516281 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:08:57.516293 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:08:57.516304 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:08:57.516315 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:08:57.516326 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:08:57.516337 | orchestrator | ok: [testbed-manager]
2026-04-06 02:08:57.516348 | orchestrator |
2026-04-06 02:08:57.516359 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-04-06 02:08:57.516370 | orchestrator | Monday 06 April 2026 02:07:47 +0000 (0:00:01.204) 0:04:10.868 **********
2026-04-06 02:08:57.516381 | orchestrator | ok: [testbed-manager]
2026-04-06 02:08:57.516392 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:08:57.516403 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:08:57.516414 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:08:57.516425 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:08:57.516435 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:08:57.516496 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:08:57.516513 | orchestrator |
2026-04-06 02:08:57.516527 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-04-06 02:08:57.516540 | orchestrator | Monday 06 April 2026 02:07:48 +0000 (0:00:01.201) 0:04:12.070 **********
2026-04-06 02:08:57.516554 | orchestrator | ok: [testbed-manager]
2026-04-06 02:08:57.516568 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:08:57.516580 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:08:57.516592 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:08:57.516605 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:08:57.516617 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:08:57.516631 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:08:57.516661 | orchestrator |
2026-04-06 02:08:57.516685 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-04-06 02:08:57.516699 | orchestrator | Monday 06 April 2026 02:07:48 +0000 (0:00:00.356) 0:04:12.427 **********
2026-04-06 02:08:57.516712 | orchestrator | ok: [testbed-manager]
2026-04-06 02:08:57.516725 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:08:57.516737 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:08:57.516750 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:08:57.516763 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:08:57.516776 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:08:57.516790 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:08:57.516802 | orchestrator |
2026-04-06 02:08:57.516816 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-04-06 02:08:57.516828 | orchestrator | Monday 06 April 2026 02:07:49 +0000 (0:00:00.388) 0:04:12.815 **********
2026-04-06 02:08:57.516841 | orchestrator | ok: [testbed-manager]
2026-04-06 02:08:57.516854 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:08:57.516867 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:08:57.516905 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:08:57.516918 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:08:57.516929 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:08:57.516940 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:08:57.516951 | orchestrator |
2026-04-06 02:08:57.516963 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-04-06 02:08:57.516983 | orchestrator | Monday 06 April 2026 02:07:49 +0000 (0:00:00.346) 0:04:13.162 **********
2026-04-06 02:08:57.516999 | orchestrator | ok: [testbed-manager]
2026-04-06 02:08:57.517016 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:08:57.517037 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:08:57.517056 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:08:57.517076 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:08:57.517092 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:08:57.517103 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:08:57.517113 | orchestrator |
2026-04-06 02:08:57.517125 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-04-06 02:08:57.517136 | orchestrator | Monday 06 April 2026 02:07:55 +0000 (0:00:05.581) 0:04:18.744 **********
2026-04-06 02:08:57.517149 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:08:57.517162 | orchestrator |
2026-04-06 02:08:57.517173 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-04-06 02:08:57.517184 | orchestrator | Monday 06 April 2026 02:07:55 +0000 (0:00:00.583) 0:04:19.327 **********
2026-04-06 02:08:57.517195 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-04-06 02:08:57.517206 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-04-06 02:08:57.517217 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:08:57.517228 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-04-06 02:08:57.517239 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-04-06 02:08:57.517268 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-04-06 02:08:57.517280 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-04-06 02:08:57.517291 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:08:57.517301 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-04-06 02:08:57.517312 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-04-06 02:08:57.517335 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:08:57.517346 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-04-06 02:08:57.517357 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-04-06 02:08:57.517368 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:08:57.517379 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:08:57.517390 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-04-06 02:08:57.517429 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-04-06 02:08:57.517480 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:08:57.517502 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-04-06 02:08:57.517520 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-04-06 02:08:57.517536 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:08:57.517554 | orchestrator |
2026-04-06 02:08:57.517572 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-04-06 02:08:57.517590 | orchestrator | Monday 06 April 2026 02:07:56 +0000 (0:00:00.429) 0:04:19.756 **********
2026-04-06 02:08:57.517608 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:08:57.517625 | orchestrator |
2026-04-06 02:08:57.517644 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-04-06 02:08:57.517677 | orchestrator | Monday 06 April 2026 02:07:56 +0000 (0:00:00.470) 0:04:20.227 **********
2026-04-06 02:08:57.517696 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-04-06 02:08:57.517712 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:08:57.517729 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-04-06 02:08:57.517745 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-04-06 02:08:57.517761 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:08:57.517776 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:08:57.517793 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-04-06 02:08:57.517810 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-04-06 02:08:57.517826 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:08:57.517844 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-04-06 02:08:57.517862 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:08:57.517878 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:08:57.517894 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-04-06 02:08:57.517911 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:08:57.517927 | orchestrator |
2026-04-06 02:08:57.517945 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-04-06 02:08:57.517962 | orchestrator | Monday 06 April 2026 02:07:56 +0000 (0:00:00.420) 0:04:20.648 **********
2026-04-06 02:08:57.517980 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:08:57.517999 | orchestrator |
2026-04-06 02:08:57.518094 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-04-06 02:08:57.518118 | orchestrator | Monday 06 April 2026 02:07:57 +0000 (0:00:00.472) 0:04:21.121 **********
2026-04-06 02:08:57.518138 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:08:57.518158 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:08:57.518172 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:08:57.518183 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:08:57.518204 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:08:57.518215 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:08:57.518226 | orchestrator | changed: [testbed-manager]
2026-04-06 02:08:57.518237 | orchestrator |
2026-04-06 02:08:57.518248 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-04-06 02:08:57.518259 | orchestrator | Monday 06 April 2026 02:08:32 +0000 (0:00:35.280) 0:04:56.402 **********
2026-04-06 02:08:57.518270 | orchestrator | changed: [testbed-manager]
2026-04-06 02:08:57.518280 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:08:57.518291 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:08:57.518302 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:08:57.518312 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:08:57.518323 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:08:57.518333 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:08:57.518344 | orchestrator |
2026-04-06 02:08:57.518355 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-04-06 02:08:57.518365 | orchestrator | Monday 06 April 2026 02:08:41 +0000 (0:00:08.512) 0:05:04.915 **********
2026-04-06 02:08:57.518377 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:08:57.518387 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:08:57.518398 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:08:57.518409 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:08:57.518419 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:08:57.518430 | orchestrator |
changed: [testbed-manager] 2026-04-06 02:08:57.518474 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:08:57.518493 | orchestrator | 2026-04-06 02:08:57.518512 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-04-06 02:08:57.518546 | orchestrator | Monday 06 April 2026 02:08:49 +0000 (0:00:08.123) 0:05:13.039 ********** 2026-04-06 02:08:57.518565 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:08:57.518576 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:08:57.518587 | orchestrator | ok: [testbed-manager] 2026-04-06 02:08:57.518598 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:08:57.518609 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:08:57.518619 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:08:57.518630 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:08:57.518641 | orchestrator | 2026-04-06 02:08:57.518652 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-04-06 02:08:57.518663 | orchestrator | Monday 06 April 2026 02:08:51 +0000 (0:00:01.834) 0:05:14.873 ********** 2026-04-06 02:08:57.518674 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:08:57.518684 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:08:57.518695 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:08:57.518706 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:08:57.518716 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:08:57.518727 | orchestrator | changed: [testbed-manager] 2026-04-06 02:08:57.518738 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:08:57.518749 | orchestrator | 2026-04-06 02:08:57.518776 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-04-06 02:09:09.482669 | orchestrator | Monday 06 April 2026 02:08:57 +0000 (0:00:06.318) 0:05:21.191 ********** 2026-04-06 02:09:09.482774 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:09:09.482789 | orchestrator | 2026-04-06 02:09:09.482798 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-04-06 02:09:09.482806 | orchestrator | Monday 06 April 2026 02:08:57 +0000 (0:00:00.461) 0:05:21.653 ********** 2026-04-06 02:09:09.482814 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:09:09.482822 | orchestrator | changed: [testbed-manager] 2026-04-06 02:09:09.482830 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:09:09.482837 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:09:09.482843 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:09:09.482851 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:09:09.482858 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:09:09.482865 | orchestrator | 2026-04-06 02:09:09.482872 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-04-06 02:09:09.482880 | orchestrator | Monday 06 April 2026 02:08:58 +0000 (0:00:00.758) 0:05:22.412 ********** 2026-04-06 02:09:09.482888 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:09:09.482896 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:09:09.482903 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:09:09.482911 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:09:09.482920 | orchestrator | ok: [testbed-manager] 2026-04-06 02:09:09.482926 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:09:09.482934 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:09:09.482942 | orchestrator | 2026-04-06 02:09:09.482950 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-04-06 02:09:09.482957 | orchestrator | Monday 06 April 2026 02:09:00 +0000 (0:00:01.839) 
0:05:24.251 ********** 2026-04-06 02:09:09.482965 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:09:09.482972 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:09:09.482979 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:09:09.482987 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:09:09.482994 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:09:09.483003 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:09:09.483011 | orchestrator | changed: [testbed-manager] 2026-04-06 02:09:09.483019 | orchestrator | 2026-04-06 02:09:09.483027 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-04-06 02:09:09.483035 | orchestrator | Monday 06 April 2026 02:09:01 +0000 (0:00:00.784) 0:05:25.035 ********** 2026-04-06 02:09:09.483060 | orchestrator | skipping: [testbed-manager] 2026-04-06 02:09:09.483065 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:09:09.483070 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:09:09.483074 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:09:09.483079 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:09:09.483084 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:09:09.483088 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:09:09.483093 | orchestrator | 2026-04-06 02:09:09.483098 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-04-06 02:09:09.483103 | orchestrator | Monday 06 April 2026 02:09:01 +0000 (0:00:00.347) 0:05:25.383 ********** 2026-04-06 02:09:09.483107 | orchestrator | skipping: [testbed-manager] 2026-04-06 02:09:09.483112 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:09:09.483116 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:09:09.483134 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:09:09.483142 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:09:09.483148 | orchestrator | skipping: 
[testbed-node-1] 2026-04-06 02:09:09.483155 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:09:09.483162 | orchestrator | 2026-04-06 02:09:09.483172 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-04-06 02:09:09.483180 | orchestrator | Monday 06 April 2026 02:09:02 +0000 (0:00:00.497) 0:05:25.881 ********** 2026-04-06 02:09:09.483193 | orchestrator | ok: [testbed-manager] 2026-04-06 02:09:09.483200 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:09:09.483207 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:09:09.483215 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:09:09.483222 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:09:09.483229 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:09:09.483237 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:09:09.483244 | orchestrator | 2026-04-06 02:09:09.483251 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-04-06 02:09:09.483259 | orchestrator | Monday 06 April 2026 02:09:02 +0000 (0:00:00.337) 0:05:26.218 ********** 2026-04-06 02:09:09.483268 | orchestrator | skipping: [testbed-manager] 2026-04-06 02:09:09.483276 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:09:09.483283 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:09:09.483290 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:09:09.483298 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:09:09.483306 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:09:09.483313 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:09:09.483321 | orchestrator | 2026-04-06 02:09:09.483329 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-04-06 02:09:09.483338 | orchestrator | Monday 06 April 2026 02:09:02 +0000 (0:00:00.336) 0:05:26.554 ********** 2026-04-06 02:09:09.483346 | orchestrator | ok: [testbed-manager] 2026-04-06 02:09:09.483353 | 
orchestrator | ok: [testbed-node-3] 2026-04-06 02:09:09.483360 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:09:09.483367 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:09:09.483375 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:09:09.483382 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:09:09.483390 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:09:09.483398 | orchestrator | 2026-04-06 02:09:09.483406 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-04-06 02:09:09.483414 | orchestrator | Monday 06 April 2026 02:09:03 +0000 (0:00:00.353) 0:05:26.908 ********** 2026-04-06 02:09:09.483421 | orchestrator | ok: [testbed-manager] =>  2026-04-06 02:09:09.483479 | orchestrator |  docker_version: 5:27.5.1 2026-04-06 02:09:09.483488 | orchestrator | ok: [testbed-node-3] =>  2026-04-06 02:09:09.483496 | orchestrator |  docker_version: 5:27.5.1 2026-04-06 02:09:09.483503 | orchestrator | ok: [testbed-node-4] =>  2026-04-06 02:09:09.483509 | orchestrator |  docker_version: 5:27.5.1 2026-04-06 02:09:09.483515 | orchestrator | ok: [testbed-node-5] =>  2026-04-06 02:09:09.483520 | orchestrator |  docker_version: 5:27.5.1 2026-04-06 02:09:09.483549 | orchestrator | ok: [testbed-node-0] =>  2026-04-06 02:09:09.483564 | orchestrator |  docker_version: 5:27.5.1 2026-04-06 02:09:09.483572 | orchestrator | ok: [testbed-node-1] =>  2026-04-06 02:09:09.483580 | orchestrator |  docker_version: 5:27.5.1 2026-04-06 02:09:09.483587 | orchestrator | ok: [testbed-node-2] =>  2026-04-06 02:09:09.483595 | orchestrator |  docker_version: 5:27.5.1 2026-04-06 02:09:09.483603 | orchestrator | 2026-04-06 02:09:09.483610 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-04-06 02:09:09.483618 | orchestrator | Monday 06 April 2026 02:09:03 +0000 (0:00:00.307) 0:05:27.216 ********** 2026-04-06 02:09:09.483626 | orchestrator | ok: [testbed-manager] =>  2026-04-06 
02:09:09.483634 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-06 02:09:09.483642 | orchestrator | ok: [testbed-node-3] =>  2026-04-06 02:09:09.483650 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-06 02:09:09.483658 | orchestrator | ok: [testbed-node-4] =>  2026-04-06 02:09:09.483666 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-06 02:09:09.483673 | orchestrator | ok: [testbed-node-5] =>  2026-04-06 02:09:09.483681 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-06 02:09:09.483689 | orchestrator | ok: [testbed-node-0] =>  2026-04-06 02:09:09.483696 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-06 02:09:09.483705 | orchestrator | ok: [testbed-node-1] =>  2026-04-06 02:09:09.483711 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-06 02:09:09.483718 | orchestrator | ok: [testbed-node-2] =>  2026-04-06 02:09:09.483726 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-06 02:09:09.483733 | orchestrator | 2026-04-06 02:09:09.483740 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-04-06 02:09:09.483748 | orchestrator | Monday 06 April 2026 02:09:03 +0000 (0:00:00.322) 0:05:27.538 ********** 2026-04-06 02:09:09.483755 | orchestrator | skipping: [testbed-manager] 2026-04-06 02:09:09.483762 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:09:09.483769 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:09:09.483777 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:09:09.483784 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:09:09.483792 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:09:09.483799 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:09:09.483806 | orchestrator | 2026-04-06 02:09:09.483814 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-04-06 02:09:09.483822 | orchestrator | Monday 06 April 2026 02:09:04 +0000 (0:00:00.307) 0:05:27.846 ********** 
2026-04-06 02:09:09.483829 | orchestrator | skipping: [testbed-manager] 2026-04-06 02:09:09.483836 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:09:09.483844 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:09:09.483851 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:09:09.483859 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:09:09.483865 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:09:09.483872 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:09:09.483880 | orchestrator | 2026-04-06 02:09:09.483887 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-04-06 02:09:09.483892 | orchestrator | Monday 06 April 2026 02:09:04 +0000 (0:00:00.298) 0:05:28.145 ********** 2026-04-06 02:09:09.483899 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:09:09.483906 | orchestrator | 2026-04-06 02:09:09.483916 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-04-06 02:09:09.483921 | orchestrator | Monday 06 April 2026 02:09:04 +0000 (0:00:00.480) 0:05:28.625 ********** 2026-04-06 02:09:09.483928 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:09:09.483936 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:09:09.483943 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:09:09.483951 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:09:09.483958 | orchestrator | ok: [testbed-manager] 2026-04-06 02:09:09.483971 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:09:09.483978 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:09:09.483985 | orchestrator | 2026-04-06 02:09:09.483993 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-04-06 
02:09:09.484001 | orchestrator | Monday 06 April 2026 02:09:06 +0000 (0:00:01.076) 0:05:29.701 ********** 2026-04-06 02:09:09.484008 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:09:09.484016 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:09:09.484024 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:09:09.484031 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:09:09.484039 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:09:09.484046 | orchestrator | ok: [testbed-manager] 2026-04-06 02:09:09.484054 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:09:09.484061 | orchestrator | 2026-04-06 02:09:09.484069 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-04-06 02:09:09.484078 | orchestrator | Monday 06 April 2026 02:09:09 +0000 (0:00:03.034) 0:05:32.736 ********** 2026-04-06 02:09:09.484086 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-04-06 02:09:09.484094 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-04-06 02:09:09.484101 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-04-06 02:09:09.484109 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-04-06 02:09:09.484117 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-04-06 02:09:09.484124 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-04-06 02:09:09.484132 | orchestrator | skipping: [testbed-manager] 2026-04-06 02:09:09.484139 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-04-06 02:09:09.484147 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-04-06 02:09:09.484154 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-04-06 02:09:09.484161 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:09:09.484169 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-04-06 02:09:09.484183 | orchestrator | 
skipping: [testbed-node-5] => (item=docker.io)  2026-04-06 02:09:09.484192 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-04-06 02:09:09.484198 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:09:09.484206 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-04-06 02:09:09.484222 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-04-06 02:10:11.396983 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-04-06 02:10:11.397099 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:10:11.397117 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-04-06 02:10:11.397129 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-04-06 02:10:11.397140 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-04-06 02:10:11.397151 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:10:11.397162 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:10:11.397173 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-04-06 02:10:11.397184 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-04-06 02:10:11.397195 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-04-06 02:10:11.397206 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:10:11.397218 | orchestrator | 2026-04-06 02:10:11.397231 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-04-06 02:10:11.397243 | orchestrator | Monday 06 April 2026 02:09:09 +0000 (0:00:00.664) 0:05:33.401 ********** 2026-04-06 02:10:11.397255 | orchestrator | ok: [testbed-manager] 2026-04-06 02:10:11.397266 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:10:11.397277 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:10:11.397288 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:10:11.397300 | orchestrator | changed: [testbed-node-0] 2026-04-06 
02:10:11.397311 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:10:11.397347 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:10:11.397359 | orchestrator | 2026-04-06 02:10:11.397370 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-04-06 02:10:11.397473 | orchestrator | Monday 06 April 2026 02:09:16 +0000 (0:00:06.793) 0:05:40.194 ********** 2026-04-06 02:10:11.397505 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:10:11.397527 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:10:11.397546 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:10:11.397565 | orchestrator | ok: [testbed-manager] 2026-04-06 02:10:11.397582 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:10:11.397595 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:10:11.397609 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:10:11.397621 | orchestrator | 2026-04-06 02:10:11.397634 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-04-06 02:10:11.397653 | orchestrator | Monday 06 April 2026 02:09:17 +0000 (0:00:01.110) 0:05:41.305 ********** 2026-04-06 02:10:11.397671 | orchestrator | ok: [testbed-manager] 2026-04-06 02:10:11.397689 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:10:11.397707 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:10:11.397724 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:10:11.397740 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:10:11.397757 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:10:11.397774 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:10:11.397793 | orchestrator | 2026-04-06 02:10:11.397810 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-04-06 02:10:11.397827 | orchestrator | Monday 06 April 2026 02:09:26 +0000 (0:00:08.400) 0:05:49.706 ********** 2026-04-06 02:10:11.397843 | 
orchestrator | changed: [testbed-manager] 2026-04-06 02:10:11.397860 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:10:11.397877 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:10:11.397895 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:10:11.397913 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:10:11.397931 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:10:11.397949 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:10:11.397968 | orchestrator | 2026-04-06 02:10:11.397987 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-04-06 02:10:11.398006 | orchestrator | Monday 06 April 2026 02:09:29 +0000 (0:00:03.556) 0:05:53.263 ********** 2026-04-06 02:10:11.398094 | orchestrator | ok: [testbed-manager] 2026-04-06 02:10:11.398107 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:10:11.398119 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:10:11.398130 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:10:11.398141 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:10:11.398151 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:10:11.398162 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:10:11.398173 | orchestrator | 2026-04-06 02:10:11.398185 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-04-06 02:10:11.398204 | orchestrator | Monday 06 April 2026 02:09:30 +0000 (0:00:01.353) 0:05:54.616 ********** 2026-04-06 02:10:11.398223 | orchestrator | ok: [testbed-manager] 2026-04-06 02:10:11.398241 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:10:11.398259 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:10:11.398275 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:10:11.398294 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:10:11.398312 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:10:11.398332 | orchestrator | changed: 
[testbed-node-2] 2026-04-06 02:10:11.398349 | orchestrator | 2026-04-06 02:10:11.398368 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-04-06 02:10:11.398408 | orchestrator | Monday 06 April 2026 02:09:32 +0000 (0:00:01.557) 0:05:56.174 ********** 2026-04-06 02:10:11.398426 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:10:11.398439 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:10:11.398450 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:10:11.398461 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:10:11.398485 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:10:11.398496 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:10:11.398507 | orchestrator | changed: [testbed-manager] 2026-04-06 02:10:11.398518 | orchestrator | 2026-04-06 02:10:11.398529 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-04-06 02:10:11.398540 | orchestrator | Monday 06 April 2026 02:09:33 +0000 (0:00:00.655) 0:05:56.829 ********** 2026-04-06 02:10:11.398550 | orchestrator | ok: [testbed-manager] 2026-04-06 02:10:11.398561 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:10:11.398572 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:10:11.398582 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:10:11.398593 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:10:11.398603 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:10:11.398614 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:10:11.398624 | orchestrator | 2026-04-06 02:10:11.398635 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-04-06 02:10:11.398669 | orchestrator | Monday 06 April 2026 02:09:42 +0000 (0:00:09.555) 0:06:06.385 ********** 2026-04-06 02:10:11.398688 | orchestrator | changed: [testbed-manager] 2026-04-06 02:10:11.398706 | orchestrator | changed: [testbed-node-3] 
2026-04-06 02:10:11.398724 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:10:11.398740 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:10:11.398759 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:10:11.398777 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:10:11.398793 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:10:11.398811 | orchestrator | 2026-04-06 02:10:11.398830 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-04-06 02:10:11.398846 | orchestrator | Monday 06 April 2026 02:09:43 +0000 (0:00:00.915) 0:06:07.300 ********** 2026-04-06 02:10:11.398862 | orchestrator | ok: [testbed-manager] 2026-04-06 02:10:11.398880 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:10:11.398897 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:10:11.398914 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:10:11.398929 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:10:11.398947 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:10:11.398962 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:10:11.398980 | orchestrator | 2026-04-06 02:10:11.398995 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-04-06 02:10:11.399012 | orchestrator | Monday 06 April 2026 02:09:53 +0000 (0:00:09.791) 0:06:17.092 ********** 2026-04-06 02:10:11.399029 | orchestrator | ok: [testbed-manager] 2026-04-06 02:10:11.399047 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:10:11.399064 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:10:11.399082 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:10:11.399098 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:10:11.399114 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:10:11.399129 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:10:11.399145 | orchestrator | 2026-04-06 02:10:11.399162 | orchestrator | TASK 
[osism.services.docker : Unblock installation of python docker packages] ***
2026-04-06 02:10:11.399179 | orchestrator | Monday 06 April 2026 02:10:04 +0000 (0:00:10.945) 0:06:28.038 **********
2026-04-06 02:10:11.399195 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-04-06 02:10:11.399213 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-04-06 02:10:11.399232 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-04-06 02:10:11.399248 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-04-06 02:10:11.399265 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-04-06 02:10:11.399281 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-04-06 02:10:11.399297 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-04-06 02:10:11.399313 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-04-06 02:10:11.399329 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-04-06 02:10:11.399365 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-04-06 02:10:11.399410 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-04-06 02:10:11.399490 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-04-06 02:10:11.399511 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-04-06 02:10:11.399527 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-04-06 02:10:11.399545 | orchestrator |
2026-04-06 02:10:11.399562 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-04-06 02:10:11.399579 | orchestrator | Monday 06 April 2026 02:10:05 +0000 (0:00:01.222) 0:06:29.261 **********
2026-04-06 02:10:11.399604 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:10:11.399624 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:10:11.399640 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:10:11.399660 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:10:11.399697 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:10:11.399729 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:10:11.399746 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:10:11.399762 | orchestrator |
2026-04-06 02:10:11.399779 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-04-06 02:10:11.399796 | orchestrator | Monday 06 April 2026 02:10:06 +0000 (0:00:00.544) 0:06:29.805 **********
2026-04-06 02:10:11.399813 | orchestrator | ok: [testbed-manager]
2026-04-06 02:10:11.399829 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:10:11.399845 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:10:11.399862 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:10:11.399879 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:10:11.399896 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:10:11.399912 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:10:11.399930 | orchestrator |
2026-04-06 02:10:11.399947 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-04-06 02:10:11.399966 | orchestrator | Monday 06 April 2026 02:10:10 +0000 (0:00:04.112) 0:06:33.917 **********
2026-04-06 02:10:11.399984 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:10:11.400000 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:10:11.400017 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:10:11.400033 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:10:11.400050 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:10:11.400067 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:10:11.400084 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:10:11.400101 | orchestrator |
2026-04-06 02:10:11.400119 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-04-06 02:10:11.400137 | orchestrator | Monday 06 April 2026 02:10:10 +0000 (0:00:00.564) 0:06:34.482 **********
2026-04-06 02:10:11.400154 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-04-06 02:10:11.400172 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-04-06 02:10:11.400189 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:10:11.400206 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-04-06 02:10:11.400232 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-04-06 02:10:11.400251 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:10:11.400269 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-04-06 02:10:11.400287 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-04-06 02:10:11.400306 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:10:11.400348 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-04-06 02:10:32.584894 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-04-06 02:10:32.585007 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:10:32.585023 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-04-06 02:10:32.585032 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-04-06 02:10:32.585040 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:10:32.585071 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-04-06 02:10:32.585079 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-04-06 02:10:32.585086 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:10:32.585093 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-04-06 02:10:32.585101 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-04-06 02:10:32.585107 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:10:32.585115 | orchestrator |
2026-04-06 02:10:32.585124 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-04-06 02:10:32.585133 | orchestrator | Monday 06 April 2026 02:10:11 +0000 (0:00:00.967) 0:06:35.449 **********
2026-04-06 02:10:32.585141 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:10:32.585149 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:10:32.585156 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:10:32.585162 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:10:32.585169 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:10:32.585177 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:10:32.585182 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:10:32.585187 | orchestrator |
2026-04-06 02:10:32.585193 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-04-06 02:10:32.585198 | orchestrator | Monday 06 April 2026 02:10:12 +0000 (0:00:00.599) 0:06:36.049 **********
2026-04-06 02:10:32.585202 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:10:32.585207 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:10:32.585212 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:10:32.585216 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:10:32.585221 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:10:32.585225 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:10:32.585230 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:10:32.585234 | orchestrator |
2026-04-06 02:10:32.585238 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-04-06 02:10:32.585242 | orchestrator | Monday 06 April 2026 02:10:12 +0000 (0:00:00.624) 0:06:36.674 **********
2026-04-06 02:10:32.585246 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:10:32.585250 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:10:32.585254 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:10:32.585258 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:10:32.585263 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:10:32.585267 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:10:32.585271 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:10:32.585275 | orchestrator |
2026-04-06 02:10:32.585279 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-04-06 02:10:32.585283 | orchestrator | Monday 06 April 2026 02:10:13 +0000 (0:00:00.607) 0:06:37.281 **********
2026-04-06 02:10:32.585287 | orchestrator | ok: [testbed-manager]
2026-04-06 02:10:32.585292 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:10:32.585296 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:10:32.585300 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:10:32.585304 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:10:32.585308 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:10:32.585312 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:10:32.585316 | orchestrator |
2026-04-06 02:10:32.585321 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-04-06 02:10:32.585325 | orchestrator | Monday 06 April 2026 02:10:15 +0000 (0:00:01.932) 0:06:39.214 **********
2026-04-06 02:10:32.585330 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:10:32.585336 | orchestrator |
2026-04-06 02:10:32.585341 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-04-06 02:10:32.585345 | orchestrator | Monday 06 April 2026 02:10:16 +0000 (0:00:01.012) 0:06:40.226 **********
2026-04-06 02:10:32.585396 | orchestrator | ok: [testbed-manager]
2026-04-06 02:10:32.585404 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:10:32.585408 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:10:32.585412 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:10:32.585416 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:10:32.585420 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:10:32.585424 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:10:32.585428 | orchestrator |
2026-04-06 02:10:32.585433 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-04-06 02:10:32.585438 | orchestrator | Monday 06 April 2026 02:10:17 +0000 (0:00:01.102) 0:06:41.329 **********
2026-04-06 02:10:32.585443 | orchestrator | ok: [testbed-manager]
2026-04-06 02:10:32.585448 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:10:32.585452 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:10:32.585457 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:10:32.585462 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:10:32.585466 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:10:32.585471 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:10:32.585476 | orchestrator |
2026-04-06 02:10:32.585481 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-04-06 02:10:32.585486 | orchestrator | Monday 06 April 2026 02:10:18 +0000 (0:00:00.921) 0:06:42.250 **********
2026-04-06 02:10:32.585491 | orchestrator | ok: [testbed-manager]
2026-04-06 02:10:32.585496 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:10:32.585500 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:10:32.585505 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:10:32.585510 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:10:32.585515 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:10:32.585519 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:10:32.585524 | orchestrator |
2026-04-06 02:10:32.585529 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-04-06 02:10:32.585549 | orchestrator | Monday 06 April 2026 02:10:20 +0000 (0:00:01.658) 0:06:43.909 **********
2026-04-06 02:10:32.585554 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:10:32.585559 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:10:32.585564 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:10:32.585569 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:10:32.585574 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:10:32.585578 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:10:32.585583 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:10:32.585588 | orchestrator |
2026-04-06 02:10:32.585592 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-04-06 02:10:32.585597 | orchestrator | Monday 06 April 2026 02:10:21 +0000 (0:00:01.462) 0:06:45.372 **********
2026-04-06 02:10:32.585602 | orchestrator | ok: [testbed-manager]
2026-04-06 02:10:32.585606 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:10:32.585612 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:10:32.585616 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:10:32.585621 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:10:32.585626 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:10:32.585630 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:10:32.585635 | orchestrator |
2026-04-06 02:10:32.585640 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-04-06 02:10:32.585645 | orchestrator | Monday 06 April 2026 02:10:23 +0000 (0:00:01.375) 0:06:46.747 **********
2026-04-06 02:10:32.585650 | orchestrator | changed: [testbed-manager]
2026-04-06 02:10:32.585654 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:10:32.585659 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:10:32.585664 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:10:32.585668 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:10:32.585673 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:10:32.585678 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:10:32.585683 | orchestrator |
2026-04-06 02:10:32.585692 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-04-06 02:10:32.585697 | orchestrator | Monday 06 April 2026 02:10:24 +0000 (0:00:01.459) 0:06:48.206 **********
2026-04-06 02:10:32.585702 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:10:32.585707 | orchestrator |
2026-04-06 02:10:32.585712 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-04-06 02:10:32.585717 | orchestrator | Monday 06 April 2026 02:10:25 +0000 (0:00:01.150) 0:06:49.357 **********
2026-04-06 02:10:32.585721 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:10:32.585726 | orchestrator | ok: [testbed-manager]
2026-04-06 02:10:32.585731 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:10:32.585735 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:10:32.585740 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:10:32.585744 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:10:32.585749 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:10:32.585754 | orchestrator |
2026-04-06 02:10:32.585759 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-04-06 02:10:32.585763 | orchestrator | Monday 06 April 2026 02:10:27 +0000 (0:00:01.432) 0:06:50.790 **********
2026-04-06 02:10:32.585768 | orchestrator | ok: [testbed-manager]
2026-04-06 02:10:32.585773 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:10:32.585778 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:10:32.585782 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:10:32.585787 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:10:32.585803 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:10:32.585808 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:10:32.585813 | orchestrator |
2026-04-06 02:10:32.585817 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-04-06 02:10:32.585822 | orchestrator | Monday 06 April 2026 02:10:28 +0000 (0:00:01.411) 0:06:52.202 **********
2026-04-06 02:10:32.585826 | orchestrator | ok: [testbed-manager]
2026-04-06 02:10:32.585830 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:10:32.585834 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:10:32.585839 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:10:32.585843 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:10:32.585847 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:10:32.585851 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:10:32.585855 | orchestrator |
2026-04-06 02:10:32.585859 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-04-06 02:10:32.585863 | orchestrator | Monday 06 April 2026 02:10:29 +0000 (0:00:01.202) 0:06:53.404 **********
2026-04-06 02:10:32.585867 | orchestrator | ok: [testbed-manager]
2026-04-06 02:10:32.585871 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:10:32.585875 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:10:32.585879 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:10:32.585883 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:10:32.585887 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:10:32.585891 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:10:32.585896 | orchestrator |
2026-04-06 02:10:32.585900 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-04-06 02:10:32.585904 | orchestrator | Monday 06 April 2026 02:10:31 +0000 (0:00:01.500) 0:06:54.904 **********
2026-04-06 02:10:32.585908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:10:32.585912 | orchestrator |
2026-04-06 02:10:32.585916 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-06 02:10:32.585920 | orchestrator | Monday 06 April 2026 02:10:32 +0000 (0:00:01.002) 0:06:55.907 **********
2026-04-06 02:10:32.585925 | orchestrator |
2026-04-06 02:10:32.585929 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-06 02:10:32.585937 | orchestrator | Monday 06 April 2026 02:10:32 +0000 (0:00:00.050) 0:06:55.957 **********
2026-04-06 02:10:32.585941 | orchestrator |
2026-04-06 02:10:32.585945 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-06 02:10:32.585949 | orchestrator | Monday 06 April 2026 02:10:32 +0000 (0:00:00.046) 0:06:56.004 **********
2026-04-06 02:10:32.585953 | orchestrator |
2026-04-06 02:10:32.585957 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-06 02:10:32.585964 | orchestrator | Monday 06 April 2026 02:10:32 +0000 (0:00:00.053) 0:06:56.057 **********
2026-04-06 02:10:59.871449 | orchestrator |
2026-04-06 02:10:59.871570 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-06 02:10:59.871588 | orchestrator | Monday 06 April 2026 02:10:32 +0000 (0:00:00.049) 0:06:56.107 **********
2026-04-06 02:10:59.871601 | orchestrator |
2026-04-06 02:10:59.871613 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-06 02:10:59.871625 | orchestrator | Monday 06 April 2026 02:10:32 +0000 (0:00:00.046) 0:06:56.153 **********
2026-04-06 02:10:59.871636 | orchestrator |
2026-04-06 02:10:59.871648 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-06 02:10:59.871660 | orchestrator | Monday 06 April 2026 02:10:32 +0000 (0:00:00.062) 0:06:56.216 **********
2026-04-06 02:10:59.871672 | orchestrator |
2026-04-06 02:10:59.871683 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-06 02:10:59.871696 | orchestrator | Monday 06 April 2026 02:10:32 +0000 (0:00:00.044) 0:06:56.261 **********
2026-04-06 02:10:59.871709 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:10:59.871722 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:10:59.871735 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:10:59.871746 | orchestrator |
2026-04-06 02:10:59.871758 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-04-06 02:10:59.871770 | orchestrator | Monday 06 April 2026 02:10:33 +0000 (0:00:01.213) 0:06:57.475 **********
2026-04-06 02:10:59.871782 | orchestrator | changed: [testbed-manager]
2026-04-06 02:10:59.871795 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:10:59.871807 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:10:59.871819 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:10:59.871830 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:10:59.871840 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:10:59.871851 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:10:59.871862 | orchestrator |
2026-04-06 02:10:59.871872 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-04-06 02:10:59.871884 | orchestrator | Monday 06 April 2026 02:10:35 +0000 (0:00:01.565) 0:06:59.041 **********
2026-04-06 02:10:59.871895 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:10:59.871906 | orchestrator | changed: [testbed-manager]
2026-04-06 02:10:59.871917 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:10:59.871927 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:10:59.871939 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:10:59.871950 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:10:59.871961 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:10:59.871972 | orchestrator |
2026-04-06 02:10:59.871983 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-04-06 02:10:59.871994 | orchestrator | Monday 06 April 2026 02:10:36 +0000 (0:00:01.255) 0:07:00.296 **********
2026-04-06 02:10:59.872006 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:10:59.872017 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:10:59.872029 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:10:59.872041 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:10:59.872053 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:10:59.872064 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:10:59.872075 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:10:59.872086 | orchestrator |
2026-04-06 02:10:59.872098 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-04-06 02:10:59.872109 | orchestrator | Monday 06 April 2026 02:10:38 +0000 (0:00:02.352) 0:07:02.648 **********
2026-04-06 02:10:59.872219 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:10:59.872233 | orchestrator |
2026-04-06 02:10:59.872243 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-04-06 02:10:59.872254 | orchestrator | Monday 06 April 2026 02:10:39 +0000 (0:00:00.135) 0:07:02.784 **********
2026-04-06 02:10:59.872263 | orchestrator | ok: [testbed-manager]
2026-04-06 02:10:59.872273 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:10:59.872283 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:10:59.872294 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:10:59.872304 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:10:59.872313 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:10:59.872322 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:10:59.872333 | orchestrator |
2026-04-06 02:10:59.872420 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-04-06 02:10:59.872432 | orchestrator | Monday 06 April 2026 02:10:40 +0000 (0:00:01.115) 0:07:03.899 **********
2026-04-06 02:10:59.872443 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:10:59.872453 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:10:59.872462 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:10:59.872472 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:10:59.872481 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:10:59.872490 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:10:59.872501 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:10:59.872511 | orchestrator |
2026-04-06 02:10:59.872521 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-04-06 02:10:59.872530 | orchestrator | Monday 06 April 2026 02:10:40 +0000 (0:00:00.619) 0:07:04.519 **********
2026-04-06 02:10:59.872542 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:10:59.872554 | orchestrator |
2026-04-06 02:10:59.872563 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-04-06 02:10:59.872573 | orchestrator | Monday 06 April 2026 02:10:42 +0000 (0:00:01.254) 0:07:05.773 **********
2026-04-06 02:10:59.872583 | orchestrator | ok: [testbed-manager]
2026-04-06 02:10:59.872594 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:10:59.872604 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:10:59.872611 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:10:59.872617 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:10:59.872624 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:10:59.872630 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:10:59.872637 | orchestrator |
2026-04-06 02:10:59.872648 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-04-06 02:10:59.872658 | orchestrator | Monday 06 April 2026 02:10:43 +0000 (0:00:00.943) 0:07:06.717 **********
2026-04-06 02:10:59.872668 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-04-06 02:10:59.872701 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-04-06 02:10:59.872713 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-04-06 02:10:59.872724 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-04-06 02:10:59.872736 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-04-06 02:10:59.872746 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-04-06 02:10:59.872756 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-04-06 02:10:59.872767 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-04-06 02:10:59.872774 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-04-06 02:10:59.872780 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-04-06 02:10:59.872786 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-04-06 02:10:59.872792 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-04-06 02:10:59.872809 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-04-06 02:10:59.872815 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-04-06 02:10:59.872821 | orchestrator |
2026-04-06 02:10:59.872827 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-04-06 02:10:59.872834 | orchestrator | Monday 06 April 2026 02:10:45 +0000 (0:00:02.498) 0:07:09.215 **********
2026-04-06 02:10:59.872840 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:10:59.872846 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:10:59.872852 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:10:59.872858 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:10:59.872864 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:10:59.872870 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:10:59.872877 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:10:59.872883 | orchestrator |
2026-04-06 02:10:59.872889 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-04-06 02:10:59.872895 | orchestrator | Monday 06 April 2026 02:10:46 +0000 (0:00:00.835) 0:07:10.051 **********
2026-04-06 02:10:59.872904 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:10:59.872912 | orchestrator |
2026-04-06 02:10:59.872918 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-04-06 02:10:59.872924 | orchestrator | Monday 06 April 2026 02:10:47 +0000 (0:00:01.003) 0:07:11.055 **********
2026-04-06 02:10:59.872930 | orchestrator | ok: [testbed-manager]
2026-04-06 02:10:59.872936 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:10:59.872942 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:10:59.872949 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:10:59.872955 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:10:59.872961 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:10:59.872967 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:10:59.872973 | orchestrator |
2026-04-06 02:10:59.872979 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-04-06 02:10:59.872986 | orchestrator | Monday 06 April 2026 02:10:48 +0000 (0:00:00.883) 0:07:11.938 **********
2026-04-06 02:10:59.873044 | orchestrator | ok: [testbed-manager]
2026-04-06 02:10:59.873052 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:10:59.873058 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:10:59.873064 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:10:59.873071 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:10:59.873077 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:10:59.873083 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:10:59.873089 | orchestrator |
2026-04-06 02:10:59.873096 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-04-06 02:10:59.873102 | orchestrator | Monday 06 April 2026 02:10:49 +0000 (0:00:01.119) 0:07:13.058 **********
2026-04-06 02:10:59.873109 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:10:59.873115 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:10:59.873121 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:10:59.873127 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:10:59.873134 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:10:59.873140 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:10:59.873146 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:10:59.873152 | orchestrator |
2026-04-06 02:10:59.873159 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-04-06 02:10:59.873165 | orchestrator | Monday 06 April 2026 02:10:50 +0000 (0:00:00.746) 0:07:13.804 **********
2026-04-06 02:10:59.873171 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:10:59.873177 | orchestrator | ok: [testbed-manager]
2026-04-06 02:10:59.873183 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:10:59.873189 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:10:59.873196 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:10:59.873207 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:10:59.873213 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:10:59.873219 | orchestrator |
2026-04-06 02:10:59.873225 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-04-06 02:10:59.873232 | orchestrator | Monday 06 April 2026 02:10:51 +0000 (0:00:01.542) 0:07:15.347 **********
2026-04-06 02:10:59.873238 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:10:59.873244 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:10:59.873250 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:10:59.873257 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:10:59.873263 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:10:59.873269 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:10:59.873275 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:10:59.873281 | orchestrator |
2026-04-06 02:10:59.873288 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-04-06 02:10:59.873294 | orchestrator | Monday 06 April 2026 02:10:52 +0000 (0:00:00.554) 0:07:15.901 **********
2026-04-06 02:10:59.873300 | orchestrator | ok: [testbed-manager]
2026-04-06 02:10:59.873307 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:10:59.873313 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:10:59.873319 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:10:59.873325 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:10:59.873331 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:10:59.873363 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:11:33.521425 | orchestrator |
2026-04-06 02:11:33.521577 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-04-06 02:11:33.521601 | orchestrator | Monday 06 April 2026 02:10:59 +0000 (0:00:07.642) 0:07:23.544 **********
2026-04-06 02:11:33.521622 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:11:33.521634 | orchestrator | ok: [testbed-manager]
2026-04-06 02:11:33.521642 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:11:33.521649 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:11:33.521656 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:11:33.521662 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:11:33.521669 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:11:33.521675 | orchestrator |
2026-04-06 02:11:33.521682 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-04-06 02:11:33.521689 | orchestrator | Monday 06 April 2026 02:11:01 +0000 (0:00:01.755) 0:07:25.299 **********
2026-04-06 02:11:33.521695 | orchestrator | ok: [testbed-manager]
2026-04-06 02:11:33.521702 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:11:33.521708 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:11:33.521714 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:11:33.521720 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:11:33.521727 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:11:33.521738 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:11:33.521748 | orchestrator |
2026-04-06 02:11:33.521757 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-04-06 02:11:33.521767 | orchestrator | Monday 06 April 2026 02:11:03 +0000 (0:00:01.816) 0:07:27.116 **********
2026-04-06 02:11:33.521776 | orchestrator | ok: [testbed-manager]
2026-04-06 02:11:33.521786 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:11:33.521796 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:11:33.521806 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:11:33.521816 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:11:33.521827 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:11:33.521836 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:11:33.521846 | orchestrator |
2026-04-06 02:11:33.521857 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-06 02:11:33.521868 | orchestrator | Monday 06 April 2026 02:11:05 +0000 (0:00:01.707) 0:07:28.823 **********
2026-04-06 02:11:33.521878 | orchestrator | ok: [testbed-manager]
2026-04-06 02:11:33.521888 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:11:33.521899 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:11:33.521935 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:11:33.521946 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:11:33.521957 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:11:33.521967 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:11:33.521978 | orchestrator |
2026-04-06 02:11:33.521987 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-06 02:11:33.521995 | orchestrator | Monday 06 April 2026 02:11:06 +0000 (0:00:00.904) 0:07:29.727 **********
2026-04-06 02:11:33.522003 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:11:33.522010 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:11:33.522064 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:11:33.522072 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:11:33.522079 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:11:33.522086 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:11:33.522093 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:11:33.522101 | orchestrator |
2026-04-06 02:11:33.522108 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-04-06 02:11:33.522116 | orchestrator | Monday 06 April 2026 02:11:07 +0000 (0:00:01.172) 0:07:30.900 **********
2026-04-06 02:11:33.522124 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:11:33.522131 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:11:33.522137 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:11:33.522143 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:11:33.522149 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:11:33.522156 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:11:33.522162 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:11:33.522168 | orchestrator |
2026-04-06 02:11:33.522174 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-04-06 02:11:33.522181 | orchestrator | Monday 06 April 2026 02:11:07 +0000 (0:00:00.591) 0:07:31.491 **********
2026-04-06 02:11:33.522187 | orchestrator | ok: [testbed-manager]
2026-04-06 02:11:33.522208 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:11:33.522215 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:11:33.522221 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:11:33.522227 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:11:33.522233 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:11:33.522240 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:11:33.522246 | orchestrator |
2026-04-06 02:11:33.522252 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-04-06 02:11:33.522258 | orchestrator | Monday 06 April 2026 02:11:08 +0000 (0:00:00.559) 0:07:32.050 **********
2026-04-06 02:11:33.522265 | orchestrator | ok: [testbed-manager]
2026-04-06 02:11:33.522271 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:11:33.522278 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:11:33.522285 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:11:33.522291 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:11:33.522297 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:11:33.522303 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:11:33.522309 | orchestrator |
2026-04-06 02:11:33.522352 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-04-06 02:11:33.522359 | orchestrator | Monday 06 April 2026 02:11:08 +0000 (0:00:00.584) 0:07:32.634 **********
2026-04-06 02:11:33.522366 | orchestrator | ok: [testbed-manager]
2026-04-06 02:11:33.522372 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:11:33.522378 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:11:33.522384 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:11:33.522390 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:11:33.522396 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:11:33.522403 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:11:33.522419 | orchestrator |
2026-04-06 02:11:33.522426 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-04-06 02:11:33.522432 | orchestrator | Monday 06 April 2026 02:11:09 +0000 (0:00:00.835) 0:07:33.470 **********
2026-04-06 02:11:33.522445 | orchestrator | ok: [testbed-manager]
2026-04-06 02:11:33.522451 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:11:33.522466 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:11:33.522472 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:11:33.522478 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:11:33.522484 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:11:33.522490 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:11:33.522497 | orchestrator |
2026-04-06 02:11:33.522520 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-04-06 02:11:33.522527 | orchestrator | Monday 06 April 2026 02:11:15 +0000 (0:00:05.699) 0:07:39.170 **********
2026-04-06 02:11:33.522533 | orchestrator |
skipping: [testbed-manager] 2026-04-06 02:11:33.522539 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:11:33.522545 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:11:33.522552 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:11:33.522558 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:11:33.522564 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:11:33.522570 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:11:33.522576 | orchestrator | 2026-04-06 02:11:33.522583 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-04-06 02:11:33.522589 | orchestrator | Monday 06 April 2026 02:11:16 +0000 (0:00:00.603) 0:07:39.774 ********** 2026-04-06 02:11:33.522598 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:11:33.522607 | orchestrator | 2026-04-06 02:11:33.522613 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-04-06 02:11:33.522619 | orchestrator | Monday 06 April 2026 02:11:17 +0000 (0:00:01.203) 0:07:40.977 ********** 2026-04-06 02:11:33.522625 | orchestrator | ok: [testbed-manager] 2026-04-06 02:11:33.522632 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:11:33.522638 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:11:33.522644 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:11:33.522650 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:11:33.522657 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:11:33.522663 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:11:33.522669 | orchestrator | 2026-04-06 02:11:33.522675 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-04-06 02:11:33.522681 | orchestrator | Monday 06 April 2026 
02:11:19 +0000 (0:00:01.997) 0:07:42.975 ********** 2026-04-06 02:11:33.522687 | orchestrator | ok: [testbed-manager] 2026-04-06 02:11:33.522694 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:11:33.522700 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:11:33.522706 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:11:33.522712 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:11:33.522718 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:11:33.522725 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:11:33.522731 | orchestrator | 2026-04-06 02:11:33.522737 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-04-06 02:11:33.522743 | orchestrator | Monday 06 April 2026 02:11:20 +0000 (0:00:01.194) 0:07:44.169 ********** 2026-04-06 02:11:33.522749 | orchestrator | ok: [testbed-manager] 2026-04-06 02:11:33.522756 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:11:33.522762 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:11:33.522768 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:11:33.522774 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:11:33.522780 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:11:33.522786 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:11:33.522792 | orchestrator | 2026-04-06 02:11:33.522799 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-04-06 02:11:33.522805 | orchestrator | Monday 06 April 2026 02:11:21 +0000 (0:00:00.922) 0:07:45.092 ********** 2026-04-06 02:11:33.522816 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-06 02:11:33.522824 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-06 02:11:33.522837 | orchestrator | changed: [testbed-node-4] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-06 02:11:33.522843 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-06 02:11:33.522849 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-06 02:11:33.522856 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-06 02:11:33.522862 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-04-06 02:11:33.522869 | orchestrator | 2026-04-06 02:11:33.522875 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-04-06 02:11:33.522882 | orchestrator | Monday 06 April 2026 02:11:23 +0000 (0:00:01.970) 0:07:47.063 ********** 2026-04-06 02:11:33.522888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:11:33.522894 | orchestrator | 2026-04-06 02:11:33.522901 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-04-06 02:11:33.522907 | orchestrator | Monday 06 April 2026 02:11:24 +0000 (0:00:00.976) 0:07:48.039 ********** 2026-04-06 02:11:33.522937 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:11:33.522948 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:11:33.522958 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:11:33.522968 | orchestrator | changed: [testbed-node-4] 2026-04-06 
02:11:33.522978 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:11:33.522988 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:11:33.522998 | orchestrator | changed: [testbed-manager] 2026-04-06 02:11:33.523008 | orchestrator | 2026-04-06 02:11:33.523025 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-04-06 02:12:06.803375 | orchestrator | Monday 06 April 2026 02:11:33 +0000 (0:00:09.155) 0:07:57.195 ********** 2026-04-06 02:12:06.803498 | orchestrator | ok: [testbed-manager] 2026-04-06 02:12:06.803517 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:12:06.803529 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:12:06.803540 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:12:06.803551 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:12:06.803562 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:12:06.803573 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:12:06.803584 | orchestrator | 2026-04-06 02:12:06.803596 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-04-06 02:12:06.803608 | orchestrator | Monday 06 April 2026 02:11:36 +0000 (0:00:02.734) 0:07:59.930 ********** 2026-04-06 02:12:06.803620 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:12:06.803630 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:12:06.803641 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:12:06.803652 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:12:06.803663 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:12:06.803674 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:12:06.803685 | orchestrator | 2026-04-06 02:12:06.803696 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-04-06 02:12:06.803707 | orchestrator | Monday 06 April 2026 02:11:37 +0000 (0:00:01.278) 0:08:01.208 ********** 2026-04-06 02:12:06.803718 | orchestrator | changed: [testbed-manager] 2026-04-06 
02:12:06.803730 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:12:06.803741 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:12:06.803752 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:12:06.803763 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:12:06.803799 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:12:06.803814 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:12:06.803827 | orchestrator | 2026-04-06 02:12:06.803840 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-04-06 02:12:06.803853 | orchestrator | 2026-04-06 02:12:06.803866 | orchestrator | TASK [Include hardening role] ************************************************** 2026-04-06 02:12:06.803880 | orchestrator | Monday 06 April 2026 02:11:38 +0000 (0:00:01.300) 0:08:02.509 ********** 2026-04-06 02:12:06.803893 | orchestrator | skipping: [testbed-manager] 2026-04-06 02:12:06.803906 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:12:06.803918 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:12:06.803931 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:12:06.803944 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:12:06.803956 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:12:06.803968 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:12:06.803981 | orchestrator | 2026-04-06 02:12:06.803993 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-04-06 02:12:06.804004 | orchestrator | 2026-04-06 02:12:06.804015 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-04-06 02:12:06.804026 | orchestrator | Monday 06 April 2026 02:11:39 +0000 (0:00:00.786) 0:08:03.295 ********** 2026-04-06 02:12:06.804037 | orchestrator | changed: [testbed-manager] 2026-04-06 02:12:06.804048 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:12:06.804059 | orchestrator 
| changed: [testbed-node-4] 2026-04-06 02:12:06.804070 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:12:06.804080 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:12:06.804091 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:12:06.804102 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:12:06.804113 | orchestrator | 2026-04-06 02:12:06.804124 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-04-06 02:12:06.804148 | orchestrator | Monday 06 April 2026 02:11:40 +0000 (0:00:01.331) 0:08:04.627 ********** 2026-04-06 02:12:06.804160 | orchestrator | ok: [testbed-manager] 2026-04-06 02:12:06.804171 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:12:06.804182 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:12:06.804192 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:12:06.804203 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:12:06.804214 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:12:06.804224 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:12:06.804235 | orchestrator | 2026-04-06 02:12:06.804246 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-04-06 02:12:06.804257 | orchestrator | Monday 06 April 2026 02:11:42 +0000 (0:00:01.517) 0:08:06.145 ********** 2026-04-06 02:12:06.804268 | orchestrator | skipping: [testbed-manager] 2026-04-06 02:12:06.804279 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:12:06.804313 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:12:06.804326 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:12:06.804337 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:12:06.804348 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:12:06.804359 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:12:06.804369 | orchestrator | 2026-04-06 02:12:06.804380 | orchestrator | TASK [Include smartd role] 
***************************************************** 2026-04-06 02:12:06.804391 | orchestrator | Monday 06 April 2026 02:11:43 +0000 (0:00:00.599) 0:08:06.744 ********** 2026-04-06 02:12:06.804403 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:12:06.804415 | orchestrator | 2026-04-06 02:12:06.804427 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-04-06 02:12:06.804437 | orchestrator | Monday 06 April 2026 02:11:44 +0000 (0:00:01.099) 0:08:07.844 ********** 2026-04-06 02:12:06.804450 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:12:06.804472 | orchestrator | 2026-04-06 02:12:06.804484 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-04-06 02:12:06.804494 | orchestrator | Monday 06 April 2026 02:11:45 +0000 (0:00:00.937) 0:08:08.781 ********** 2026-04-06 02:12:06.804505 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:12:06.804516 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:12:06.804527 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:12:06.804538 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:12:06.804549 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:12:06.804559 | orchestrator | changed: [testbed-manager] 2026-04-06 02:12:06.804570 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:12:06.804581 | orchestrator | 2026-04-06 02:12:06.804611 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-04-06 02:12:06.804623 | orchestrator | Monday 06 April 2026 02:11:54 +0000 (0:00:09.258) 0:08:18.040 ********** 
2026-04-06 02:12:06.804634 | orchestrator | changed: [testbed-manager]
2026-04-06 02:12:06.804644 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:12:06.804655 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:12:06.804666 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:12:06.804677 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:12:06.804688 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:12:06.804699 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:12:06.804710 | orchestrator |
2026-04-06 02:12:06.804721 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-04-06 02:12:06.804732 | orchestrator | Monday 06 April 2026 02:11:55 +0000 (0:00:01.117) 0:08:19.157 **********
2026-04-06 02:12:06.804743 | orchestrator | changed: [testbed-manager]
2026-04-06 02:12:06.804754 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:12:06.804764 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:12:06.804775 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:12:06.804786 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:12:06.804796 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:12:06.804807 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:12:06.804818 | orchestrator |
2026-04-06 02:12:06.804829 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-04-06 02:12:06.804840 | orchestrator | Monday 06 April 2026 02:11:56 +0000 (0:00:01.340) 0:08:20.498 **********
2026-04-06 02:12:06.804851 | orchestrator | changed: [testbed-manager]
2026-04-06 02:12:06.804861 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:12:06.804872 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:12:06.804883 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:12:06.804894 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:12:06.804904 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:12:06.804915 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:12:06.804926 | orchestrator |
2026-04-06 02:12:06.804937 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-04-06 02:12:06.804948 | orchestrator | Monday 06 April 2026 02:11:58 +0000 (0:00:02.039) 0:08:22.538 **********
2026-04-06 02:12:06.804959 | orchestrator | changed: [testbed-manager]
2026-04-06 02:12:06.804969 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:12:06.804980 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:12:06.804991 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:12:06.805002 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:12:06.805013 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:12:06.805024 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:12:06.805034 | orchestrator |
2026-04-06 02:12:06.805045 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-04-06 02:12:06.805056 | orchestrator | Monday 06 April 2026 02:12:00 +0000 (0:00:01.288) 0:08:23.827 **********
2026-04-06 02:12:06.805067 | orchestrator | changed: [testbed-manager]
2026-04-06 02:12:06.805078 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:12:06.805096 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:12:06.805107 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:12:06.805118 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:12:06.805129 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:12:06.805140 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:12:06.805150 | orchestrator |
2026-04-06 02:12:06.805161 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-04-06 02:12:06.805172 | orchestrator |
2026-04-06 02:12:06.805189 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-04-06 02:12:06.805200 | orchestrator | Monday 06 April 2026 02:12:01 +0000 (0:00:01.204) 0:08:25.031 **********
2026-04-06 02:12:06.805211 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:12:06.805223 | orchestrator |
2026-04-06 02:12:06.805233 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-06 02:12:06.805253 | orchestrator | Monday 06 April 2026 02:12:02 +0000 (0:00:00.922) 0:08:25.954 **********
2026-04-06 02:12:06.805271 | orchestrator | ok: [testbed-manager]
2026-04-06 02:12:06.805288 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:12:06.805331 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:12:06.805349 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:12:06.805366 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:12:06.805384 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:12:06.805402 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:12:06.805422 | orchestrator |
2026-04-06 02:12:06.805441 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-06 02:12:06.805458 | orchestrator | Monday 06 April 2026 02:12:03 +0000 (0:00:01.124) 0:08:27.078 **********
2026-04-06 02:12:06.805476 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:12:06.805496 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:12:06.805515 | orchestrator | changed: [testbed-manager]
2026-04-06 02:12:06.805534 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:12:06.805546 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:12:06.805557 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:12:06.805567 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:12:06.805578 | orchestrator |
2026-04-06 02:12:06.805598 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-04-06 02:12:06.805616 | orchestrator | Monday 06 April 2026 02:12:04 +0000 (0:00:01.358) 0:08:28.437 **********
2026-04-06 02:12:06.805634 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:12:06.805651 | orchestrator |
2026-04-06 02:12:06.805669 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-06 02:12:06.805688 | orchestrator | Monday 06 April 2026 02:12:05 +0000 (0:00:01.168) 0:08:29.605 **********
2026-04-06 02:12:06.805707 | orchestrator | ok: [testbed-manager]
2026-04-06 02:12:06.805726 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:12:06.805743 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:12:06.805759 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:12:06.805771 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:12:06.805781 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:12:06.805792 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:12:06.805803 | orchestrator |
2026-04-06 02:12:06.805824 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-06 02:12:08.563363 | orchestrator | Monday 06 April 2026 02:12:06 +0000 (0:00:00.876) 0:08:30.482 **********
2026-04-06 02:12:08.563473 | orchestrator | changed: [testbed-manager]
2026-04-06 02:12:08.563494 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:12:08.563508 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:12:08.563523 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:12:08.563535 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:12:08.563547 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:12:08.563560 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:12:08.563607 | orchestrator |
2026-04-06 02:12:08.563624 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 02:12:08.563639 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-04-06 02:12:08.563669 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-06 02:12:08.563691 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-06 02:12:08.563703 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-06 02:12:08.563716 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-04-06 02:12:08.563727 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-06 02:12:08.563739 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-06 02:12:08.563751 | orchestrator |
2026-04-06 02:12:08.563763 | orchestrator |
2026-04-06 02:12:08.563775 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 02:12:08.563788 | orchestrator | Monday 06 April 2026 02:12:07 +0000 (0:00:01.185) 0:08:31.667 **********
2026-04-06 02:12:08.563800 | orchestrator | ===============================================================================
2026-04-06 02:12:08.563812 | orchestrator | osism.commons.packages : Install required packages --------------------- 78.84s
2026-04-06 02:12:08.563824 | orchestrator | osism.commons.packages : Download required packages -------------------- 41.28s
2026-04-06 02:12:08.563836 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.28s
2026-04-06 02:12:08.563844 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.37s
2026-04-06 02:12:08.563851 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 14.70s
2026-04-06 02:12:08.563872 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 14.13s
2026-04-06 02:12:08.563881 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.95s
2026-04-06 02:12:08.563891 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.79s
2026-04-06 02:12:08.563900 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.56s
2026-04-06 02:12:08.563908 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.26s
2026-04-06 02:12:08.563916 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.16s
2026-04-06 02:12:08.563923 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.51s
2026-04-06 02:12:08.563931 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.40s
2026-04-06 02:12:08.563939 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.12s
2026-04-06 02:12:08.563947 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.85s
2026-04-06 02:12:08.563955 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.64s
2026-04-06 02:12:08.563962 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.79s
2026-04-06 02:12:08.563970 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.32s
2026-04-06 02:12:08.563978 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.97s
2026-04-06 02:12:08.563985 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.70s
2026-04-06 02:12:08.940526 | orchestrator | + osism apply fail2ban
2026-04-06 02:12:22.226210 | orchestrator | 2026-04-06 02:12:22 | INFO  | Task ca22ad07-dcca-4d64-8e5d-b2a0b903c612 (fail2ban) was prepared for execution.
2026-04-06 02:12:22.226389 | orchestrator | 2026-04-06 02:12:22 | INFO  | It takes a moment until task ca22ad07-dcca-4d64-8e5d-b2a0b903c612 (fail2ban) has been started and output is visible here.
2026-04-06 02:12:45.333462 | orchestrator |
2026-04-06 02:12:45.333622 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-04-06 02:12:45.333636 | orchestrator |
2026-04-06 02:12:45.333643 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-04-06 02:12:45.333650 | orchestrator | Monday 06 April 2026 02:12:27 +0000 (0:00:00.341) 0:00:00.341 **********
2026-04-06 02:12:45.333658 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 02:12:45.333666 | orchestrator |
2026-04-06 02:12:45.333672 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-04-06 02:12:45.333677 | orchestrator | Monday 06 April 2026 02:12:28 +0000 (0:00:01.253) 0:00:01.595 **********
2026-04-06 02:12:45.333683 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:12:45.333690 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:12:45.333695 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:12:45.333700 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:12:45.333706 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:12:45.333711 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:12:45.333717 | orchestrator | changed: [testbed-manager]
2026-04-06 02:12:45.333723 | orchestrator |
2026-04-06 02:12:45.333729 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-04-06 02:12:45.333734 | orchestrator | Monday 06 April 2026 02:12:40 +0000 (0:00:11.422) 0:00:13.018 **********
2026-04-06 02:12:45.333740 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:12:45.333745 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:12:45.333751 | orchestrator | changed: [testbed-manager]
2026-04-06 02:12:45.333756 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:12:45.333762 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:12:45.333767 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:12:45.333773 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:12:45.333778 | orchestrator |
2026-04-06 02:12:45.333785 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-04-06 02:12:45.333793 | orchestrator | Monday 06 April 2026 02:12:41 +0000 (0:00:01.510) 0:00:14.528 **********
2026-04-06 02:12:45.333802 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:12:45.333816 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:12:45.333828 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:12:45.333836 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:12:45.333844 | orchestrator | ok: [testbed-manager]
2026-04-06 02:12:45.333853 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:12:45.333862 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:12:45.333871 | orchestrator |
2026-04-06 02:12:45.333880 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-04-06 02:12:45.333888 | orchestrator | Monday 06 April 2026 02:12:43 +0000 (0:00:01.495) 0:00:16.023 **********
2026-04-06 02:12:45.333897 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:12:45.333905 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:12:45.333913 | orchestrator | changed: [testbed-manager]
2026-04-06 02:12:45.333922 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:12:45.333931 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:12:45.333939 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:12:45.333948 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:12:45.333956 | orchestrator |
2026-04-06 02:12:45.333964 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 02:12:45.333973 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:12:45.334009 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:12:45.334077 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:12:45.334087 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:12:45.334097 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:12:45.334108 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:12:45.334117 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:12:45.334127 | orchestrator |
2026-04-06 02:12:45.334136 | orchestrator |
2026-04-06 02:12:45.334146 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 02:12:45.334156 | orchestrator | Monday 06 April 2026 02:12:44 +0000 (0:00:01.654) 0:00:17.677 **********
2026-04-06 02:12:45.334166 | orchestrator | ===============================================================================
2026-04-06 02:12:45.334175 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.42s
2026-04-06 02:12:45.334185 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.65s
2026-04-06 02:12:45.334194 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.51s
2026-04-06 02:12:45.334204 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.50s 2026-04-06 02:12:45.334213 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.25s 2026-04-06 02:12:45.694444 | orchestrator | + osism apply network 2026-04-06 02:12:57.997513 | orchestrator | 2026-04-06 02:12:57 | INFO  | Task de746a73-1007-41c8-bafb-8b158d312a02 (network) was prepared for execution. 2026-04-06 02:12:57.997617 | orchestrator | 2026-04-06 02:12:57 | INFO  | It takes a moment until task de746a73-1007-41c8-bafb-8b158d312a02 (network) has been started and output is visible here. 2026-04-06 02:13:29.116113 | orchestrator | 2026-04-06 02:13:29.116217 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-04-06 02:13:29.116231 | orchestrator | 2026-04-06 02:13:29.116238 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-04-06 02:13:29.116306 | orchestrator | Monday 06 April 2026 02:13:02 +0000 (0:00:00.285) 0:00:00.286 ********** 2026-04-06 02:13:29.116315 | orchestrator | ok: [testbed-manager] 2026-04-06 02:13:29.116322 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:13:29.116327 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:13:29.116333 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:13:29.116338 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:13:29.116343 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:13:29.116348 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:13:29.116355 | orchestrator | 2026-04-06 02:13:29.116363 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-04-06 02:13:29.116372 | orchestrator | Monday 06 April 2026 02:13:03 +0000 (0:00:00.793) 0:00:01.079 ********** 2026-04-06 02:13:29.116382 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 02:13:29.116393 | orchestrator | 2026-04-06 02:13:29.116401 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-04-06 02:13:29.116410 | orchestrator | Monday 06 April 2026 02:13:04 +0000 (0:00:01.424) 0:00:02.504 ********** 2026-04-06 02:13:29.116435 | orchestrator | ok: [testbed-manager] 2026-04-06 02:13:29.116441 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:13:29.116446 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:13:29.116451 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:13:29.116456 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:13:29.116462 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:13:29.116467 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:13:29.116472 | orchestrator | 2026-04-06 02:13:29.116477 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-04-06 02:13:29.116482 | orchestrator | Monday 06 April 2026 02:13:07 +0000 (0:00:02.292) 0:00:04.796 ********** 2026-04-06 02:13:29.116488 | orchestrator | ok: [testbed-manager] 2026-04-06 02:13:29.116493 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:13:29.116498 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:13:29.116504 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:13:29.116509 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:13:29.116514 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:13:29.116519 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:13:29.116524 | orchestrator | 2026-04-06 02:13:29.116529 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-04-06 02:13:29.116535 | orchestrator | Monday 06 April 2026 02:13:09 +0000 (0:00:01.884) 0:00:06.681 ********** 
2026-04-06 02:13:29.116541 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-04-06 02:13:29.116550 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-04-06 02:13:29.116558 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-04-06 02:13:29.116566 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-04-06 02:13:29.116574 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-04-06 02:13:29.116582 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-04-06 02:13:29.116591 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-04-06 02:13:29.116600 | orchestrator | 2026-04-06 02:13:29.116625 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-04-06 02:13:29.116632 | orchestrator | Monday 06 April 2026 02:13:10 +0000 (0:00:01.064) 0:00:07.745 ********** 2026-04-06 02:13:29.116640 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-06 02:13:29.116647 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-06 02:13:29.116652 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-06 02:13:29.116657 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-06 02:13:29.116662 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-06 02:13:29.116667 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-06 02:13:29.116672 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-06 02:13:29.116679 | orchestrator | 2026-04-06 02:13:29.116685 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-04-06 02:13:29.116691 | orchestrator | Monday 06 April 2026 02:13:13 +0000 (0:00:03.733) 0:00:11.479 ********** 2026-04-06 02:13:29.116697 | orchestrator | changed: [testbed-manager] 2026-04-06 02:13:29.116703 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:13:29.116711 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:13:29.116719 | orchestrator | changed: 
[testbed-node-2] 2026-04-06 02:13:29.116728 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:13:29.116736 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:13:29.116745 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:13:29.116754 | orchestrator | 2026-04-06 02:13:29.116763 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-04-06 02:13:29.116771 | orchestrator | Monday 06 April 2026 02:13:15 +0000 (0:00:01.637) 0:00:13.117 ********** 2026-04-06 02:13:29.116780 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-06 02:13:29.116789 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-06 02:13:29.116799 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-06 02:13:29.116807 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-06 02:13:29.116815 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-06 02:13:29.116830 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-06 02:13:29.116836 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-06 02:13:29.116842 | orchestrator | 2026-04-06 02:13:29.116848 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-04-06 02:13:29.116854 | orchestrator | Monday 06 April 2026 02:13:17 +0000 (0:00:01.985) 0:00:15.102 ********** 2026-04-06 02:13:29.116860 | orchestrator | ok: [testbed-manager] 2026-04-06 02:13:29.116866 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:13:29.116872 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:13:29.116878 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:13:29.116884 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:13:29.116890 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:13:29.116896 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:13:29.116901 | orchestrator | 2026-04-06 02:13:29.116907 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-04-06 02:13:29.116926 | 
orchestrator | Monday 06 April 2026 02:13:18 +0000 (0:00:01.168) 0:00:16.270 ********** 2026-04-06 02:13:29.116933 | orchestrator | skipping: [testbed-manager] 2026-04-06 02:13:29.116939 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:13:29.116945 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:13:29.116950 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:13:29.116956 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:13:29.116962 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:13:29.116968 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:13:29.116974 | orchestrator | 2026-04-06 02:13:29.116980 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-04-06 02:13:29.116986 | orchestrator | Monday 06 April 2026 02:13:19 +0000 (0:00:00.772) 0:00:17.043 ********** 2026-04-06 02:13:29.116992 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:13:29.116998 | orchestrator | ok: [testbed-manager] 2026-04-06 02:13:29.117007 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:13:29.117015 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:13:29.117023 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:13:29.117032 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:13:29.117041 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:13:29.117048 | orchestrator | 2026-04-06 02:13:29.117056 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-04-06 02:13:29.117061 | orchestrator | Monday 06 April 2026 02:13:21 +0000 (0:00:02.199) 0:00:19.242 ********** 2026-04-06 02:13:29.117067 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:13:29.117072 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:13:29.117077 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:13:29.117082 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:13:29.117087 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:13:29.117092 | 
orchestrator | skipping: [testbed-node-5] 2026-04-06 02:13:29.117098 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-04-06 02:13:29.117105 | orchestrator | 2026-04-06 02:13:29.117110 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-04-06 02:13:29.117115 | orchestrator | Monday 06 April 2026 02:13:22 +0000 (0:00:01.029) 0:00:20.272 ********** 2026-04-06 02:13:29.117120 | orchestrator | ok: [testbed-manager] 2026-04-06 02:13:29.117125 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:13:29.117130 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:13:29.117136 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:13:29.117141 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:13:29.117146 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:13:29.117151 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:13:29.117156 | orchestrator | 2026-04-06 02:13:29.117161 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-04-06 02:13:29.117166 | orchestrator | Monday 06 April 2026 02:13:24 +0000 (0:00:01.739) 0:00:22.011 ********** 2026-04-06 02:13:29.117172 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 02:13:29.117184 | orchestrator | 2026-04-06 02:13:29.117189 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-06 02:13:29.117194 | orchestrator | Monday 06 April 2026 02:13:25 +0000 (0:00:01.360) 0:00:23.371 ********** 2026-04-06 02:13:29.117199 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:13:29.117204 | orchestrator | ok: [testbed-manager] 2026-04-06 02:13:29.117209 | orchestrator 
| ok: [testbed-node-1] 2026-04-06 02:13:29.117214 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:13:29.117220 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:13:29.117229 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:13:29.117234 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:13:29.117239 | orchestrator | 2026-04-06 02:13:29.117264 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-04-06 02:13:29.117271 | orchestrator | Monday 06 April 2026 02:13:26 +0000 (0:00:01.027) 0:00:24.399 ********** 2026-04-06 02:13:29.117279 | orchestrator | ok: [testbed-manager] 2026-04-06 02:13:29.117287 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:13:29.117292 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:13:29.117297 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:13:29.117302 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:13:29.117307 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:13:29.117312 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:13:29.117317 | orchestrator | 2026-04-06 02:13:29.117323 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-06 02:13:29.117328 | orchestrator | Monday 06 April 2026 02:13:27 +0000 (0:00:00.950) 0:00:25.350 ********** 2026-04-06 02:13:29.117333 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-04-06 02:13:29.117338 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-04-06 02:13:29.117345 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-04-06 02:13:29.117354 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-04-06 02:13:29.117361 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-06 02:13:29.117370 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-04-06 02:13:29.117375 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-06 02:13:29.117380 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-04-06 02:13:29.117385 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-06 02:13:29.117390 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-06 02:13:29.117399 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-06 02:13:29.117406 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-06 02:13:29.117411 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-04-06 02:13:29.117416 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-06 02:13:29.117421 | orchestrator | 2026-04-06 02:13:29.117431 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-04-06 02:13:48.757824 | orchestrator | Monday 06 April 2026 02:13:29 +0000 (0:00:01.353) 0:00:26.703 ********** 2026-04-06 02:13:48.757921 | orchestrator | skipping: [testbed-manager] 2026-04-06 02:13:48.757932 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:13:48.757941 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:13:48.757948 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:13:48.757956 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:13:48.757963 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:13:48.757971 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:13:48.757978 | orchestrator | 2026-04-06 02:13:48.757987 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-04-06 02:13:48.758057 | orchestrator | Monday 06 April 2026 02:13:29 +0000 (0:00:00.681) 0:00:27.384 ********** 2026-04-06 02:13:48.758070 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-0, testbed-node-2, testbed-node-4, testbed-node-5, testbed-node-3 2026-04-06 02:13:48.758080 | orchestrator | 2026-04-06 02:13:48.758088 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-04-06 02:13:48.758095 | orchestrator | Monday 06 April 2026 02:13:35 +0000 (0:00:05.796) 0:00:33.181 ********** 2026-04-06 02:13:48.758104 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-04-06 02:13:48.758115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-04-06 02:13:48.758123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-04-06 02:13:48.758130 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-04-06 02:13:48.758148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 
'mtu': 1350, 'vni': 23}}) 2026-04-06 02:13:48.758162 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-04-06 02:13:48.758171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-04-06 02:13:48.758178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-04-06 02:13:48.758186 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-04-06 02:13:48.758193 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-04-06 02:13:48.758201 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-04-06 02:13:48.758224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-04-06 02:13:48.758279 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-04-06 02:13:48.758288 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-04-06 02:13:48.758295 | orchestrator | 2026-04-06 02:13:48.758303 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-04-06 02:13:48.758311 | orchestrator | Monday 06 April 2026 02:13:42 +0000 (0:00:06.676) 0:00:39.858 ********** 2026-04-06 02:13:48.758319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-04-06 02:13:48.758326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-04-06 02:13:48.758334 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-04-06 02:13:48.758341 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 
'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-04-06 02:13:48.758349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-04-06 02:13:48.758360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-04-06 02:13:48.758368 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-04-06 02:13:48.758378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-04-06 02:13:48.758387 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-04-06 02:13:48.758395 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 
1350, 'vni': 42}}) 2026-04-06 02:13:48.758404 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-04-06 02:13:48.758418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-04-06 02:13:48.758433 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-04-06 02:13:55.914403 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-04-06 02:13:55.914547 | orchestrator | 2026-04-06 02:13:55.914567 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-04-06 02:13:55.914580 | orchestrator | Monday 06 April 2026 02:13:48 +0000 (0:00:06.484) 0:00:46.342 ********** 2026-04-06 02:13:55.914594 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 02:13:55.914606 | orchestrator | 2026-04-06 02:13:55.914618 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
2026-04-06 02:13:55.914629 | orchestrator | Monday 06 April 2026 02:13:50 +0000 (0:00:01.531) 0:00:47.874 ********** 2026-04-06 02:13:55.914667 | orchestrator | ok: [testbed-manager] 2026-04-06 02:13:55.914699 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:13:55.914710 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:13:55.914721 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:13:55.914732 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:13:55.914742 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:13:55.914753 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:13:55.914764 | orchestrator | 2026-04-06 02:13:55.914775 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-06 02:13:55.914786 | orchestrator | Monday 06 April 2026 02:13:51 +0000 (0:00:01.339) 0:00:49.213 ********** 2026-04-06 02:13:55.914797 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-06 02:13:55.914809 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-06 02:13:55.914820 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-06 02:13:55.914831 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-06 02:13:55.914842 | orchestrator | skipping: [testbed-manager] 2026-04-06 02:13:55.914853 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-06 02:13:55.914864 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-06 02:13:55.914875 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-06 02:13:55.914886 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-06 02:13:55.914897 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan1.network)
2026-04-06 02:13:55.914908 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-06 02:13:55.914940 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-06 02:13:55.914956 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-06 02:13:55.914976 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:13:55.914995 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-06 02:13:55.915046 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-06 02:13:55.915062 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-06 02:13:55.915073 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-06 02:13:55.915084 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:13:55.915095 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-06 02:13:55.915106 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-06 02:13:55.915117 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-06 02:13:55.915128 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-06 02:13:55.915139 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:13:55.915150 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-06 02:13:55.915161 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-06 02:13:55.915171 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-06 02:13:55.915182 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-06 02:13:55.915193 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:13:55.915204 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:13:55.915215 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-06 02:13:55.915225 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-06 02:13:55.915291 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-06 02:13:55.915302 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-06 02:13:55.915313 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:13:55.915324 | orchestrator |
2026-04-06 02:13:55.915335 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-04-06 02:13:55.915365 | orchestrator | Monday 06 April 2026 02:13:53 +0000 (0:00:02.323) 0:00:51.537 **********
2026-04-06 02:13:55.915377 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:13:55.915407 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:13:55.915418 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:13:55.915429 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:13:55.915440 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:13:55.915451 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:13:55.915461 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:13:55.915472 | orchestrator |
2026-04-06 02:13:55.915483 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-04-06 02:13:55.915494 | orchestrator | Monday 06 April 2026 02:13:54 +0000 (0:00:00.700) 0:00:52.237 **********
2026-04-06 02:13:55.915505 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:13:55.915516 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:13:55.915527 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:13:55.915537 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:13:55.915549 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:13:55.915560 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:13:55.915571 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:13:55.915582 | orchestrator |
2026-04-06 02:13:55.915592 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 02:13:55.915605 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-06 02:13:55.915617 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 02:13:55.915638 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 02:13:55.915649 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 02:13:55.915660 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 02:13:55.915671 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 02:13:55.915681 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 02:13:55.915692 | orchestrator |
2026-04-06 02:13:55.915703 | orchestrator |
2026-04-06 02:13:55.915715 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 02:13:55.915726 | orchestrator | Monday 06 April 2026 02:13:55 +0000 (0:00:00.810) 0:00:53.047 **********
2026-04-06 02:13:55.915737 | orchestrator | ===============================================================================
2026-04-06 02:13:55.915754 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.68s
2026-04-06 02:13:55.915765 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.48s
2026-04-06 02:13:55.915776 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 5.80s
2026-04-06 02:13:55.915787 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.73s
2026-04-06 02:13:55.915798 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.32s
2026-04-06 02:13:55.915809 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.29s
2026-04-06 02:13:55.915820 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.20s
2026-04-06 02:13:55.915831 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.99s
2026-04-06 02:13:55.915841 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.88s
2026-04-06 02:13:55.915852 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.74s
2026-04-06 02:13:55.915863 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.64s
2026-04-06 02:13:55.915874 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.53s
2026-04-06 02:13:55.915884 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.42s
2026-04-06 02:13:55.915895 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.36s
2026-04-06 02:13:55.915909 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.35s
2026-04-06 02:13:55.915927 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.34s
2026-04-06 02:13:55.915945 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.17s
2026-04-06 02:13:55.915963 | orchestrator | osism.commons.network : Create required directories --------------------- 1.06s
2026-04-06 02:13:55.915981 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.03s
2026-04-06 02:13:55.915999 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.03s
2026-04-06 02:13:56.284677 | orchestrator | + osism apply wireguard
2026-04-06 02:14:08.541068 | orchestrator | 2026-04-06 02:14:08 | INFO  | Task c89b600c-0a31-491b-9470-4a10d18f9120 (wireguard) was prepared for execution.
2026-04-06 02:14:08.541201 | orchestrator | 2026-04-06 02:14:08 | INFO  | It takes a moment until task c89b600c-0a31-491b-9470-4a10d18f9120 (wireguard) has been started and output is visible here.
2026-04-06 02:14:31.595956 | orchestrator |
2026-04-06 02:14:31.596045 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-04-06 02:14:31.596075 | orchestrator |
2026-04-06 02:14:31.596082 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-04-06 02:14:31.596089 | orchestrator | Monday 06 April 2026 02:14:13 +0000 (0:00:00.235) 0:00:00.235 **********
2026-04-06 02:14:31.596095 | orchestrator | ok: [testbed-manager]
2026-04-06 02:14:31.596102 | orchestrator |
2026-04-06 02:14:31.596108 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-04-06 02:14:31.596114 | orchestrator | Monday 06 April 2026 02:14:15 +0000 (0:00:01.786) 0:00:02.022 **********
2026-04-06 02:14:31.596120 | orchestrator | changed: [testbed-manager]
2026-04-06 02:14:31.596127 | orchestrator |
2026-04-06 02:14:31.596135 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-04-06 02:14:31.596142 | orchestrator | Monday 06 April 2026 02:14:23 +0000 (0:00:07.748) 0:00:09.770 **********
2026-04-06 02:14:31.596148 | orchestrator | changed: [testbed-manager]
2026-04-06 02:14:31.596153 | orchestrator |
2026-04-06 02:14:31.596159 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-04-06 02:14:31.596165 | orchestrator | Monday 06 April 2026 02:14:23 +0000 (0:00:00.651) 0:00:10.422 **********
2026-04-06 02:14:31.596170 | orchestrator | changed: [testbed-manager]
2026-04-06 02:14:31.596176 | orchestrator |
2026-04-06 02:14:31.596182 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-04-06 02:14:31.596188 | orchestrator | Monday 06 April 2026 02:14:24 +0000 (0:00:00.475) 0:00:10.898 **********
2026-04-06 02:14:31.596193 | orchestrator | ok: [testbed-manager]
2026-04-06 02:14:31.596199 | orchestrator |
2026-04-06 02:14:31.596205 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-04-06 02:14:31.596237 | orchestrator | Monday 06 April 2026 02:14:24 +0000 (0:00:00.760) 0:00:11.659 **********
2026-04-06 02:14:31.596243 | orchestrator | ok: [testbed-manager]
2026-04-06 02:14:31.596249 | orchestrator |
2026-04-06 02:14:31.596255 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-04-06 02:14:31.596261 | orchestrator | Monday 06 April 2026 02:14:25 +0000 (0:00:00.429) 0:00:12.088 **********
2026-04-06 02:14:31.596267 | orchestrator | ok: [testbed-manager]
2026-04-06 02:14:31.596273 | orchestrator |
2026-04-06 02:14:31.596278 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-04-06 02:14:31.596284 | orchestrator | Monday 06 April 2026 02:14:25 +0000 (0:00:00.464) 0:00:12.553 **********
2026-04-06 02:14:31.596290 | orchestrator | changed: [testbed-manager]
2026-04-06 02:14:31.596296 | orchestrator |
2026-04-06 02:14:31.596302 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-04-06 02:14:31.596307 | orchestrator | Monday 06 April 2026 02:14:27 +0000 (0:00:01.310) 0:00:13.864 **********
2026-04-06 02:14:31.596313 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-06 02:14:31.596319 | orchestrator | changed: [testbed-manager]
2026-04-06 02:14:31.596325 | orchestrator |
2026-04-06 02:14:31.596331 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-04-06 02:14:31.596337 | orchestrator | Monday 06 April 2026 02:14:28 +0000 (0:00:01.028) 0:00:14.892 **********
2026-04-06 02:14:31.596342 | orchestrator | changed: [testbed-manager]
2026-04-06 02:14:31.596348 | orchestrator |
2026-04-06 02:14:31.596354 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-04-06 02:14:31.596360 | orchestrator | Monday 06 April 2026 02:14:30 +0000 (0:00:01.910) 0:00:16.802 **********
2026-04-06 02:14:31.596366 | orchestrator | changed: [testbed-manager]
2026-04-06 02:14:31.596372 | orchestrator |
2026-04-06 02:14:31.596378 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 02:14:31.596384 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:14:31.596391 | orchestrator |
2026-04-06 02:14:31.596404 | orchestrator |
2026-04-06 02:14:31.596410 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 02:14:31.596416 | orchestrator | Monday 06 April 2026 02:14:31 +0000 (0:00:01.106) 0:00:17.909 **********
2026-04-06 02:14:31.596427 | orchestrator | ===============================================================================
2026-04-06 02:14:31.596433 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.75s
2026-04-06 02:14:31.596439 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.91s
2026-04-06 02:14:31.596445 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.79s
2026-04-06 02:14:31.596451 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.31s
2026-04-06 02:14:31.596456 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.11s
2026-04-06 02:14:31.596462 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.03s
2026-04-06 02:14:31.596468 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.76s
2026-04-06 02:14:31.596474 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.65s
2026-04-06 02:14:31.596480 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.48s
2026-04-06 02:14:31.596486 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.46s
2026-04-06 02:14:31.596491 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.43s
2026-04-06 02:14:31.947377 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-04-06 02:14:31.987353 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-04-06 02:14:31.987451 | orchestrator | Dload Upload Total Spent Left Speed
2026-04-06 02:14:32.067567 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 184 0 --:--:-- --:--:-- --:--:-- 185
2026-04-06 02:14:32.084735 | orchestrator | + osism apply --environment custom workarounds
2026-04-06 02:14:34.215767 | orchestrator | 2026-04-06 02:14:34 | INFO  | Trying to run play workarounds in environment custom
2026-04-06 02:14:44.409427 | orchestrator | 2026-04-06 02:14:44 | INFO  | Task 3883e5fa-2652-476d-b9e5-4deccc9e473c (workarounds) was prepared for execution.
2026-04-06 02:14:44.409547 | orchestrator | 2026-04-06 02:14:44 | INFO  | It takes a moment until task 3883e5fa-2652-476d-b9e5-4deccc9e473c (workarounds) has been started and output is visible here.
2026-04-06 02:15:11.636274 | orchestrator |
2026-04-06 02:15:11.636375 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 02:15:11.636391 | orchestrator |
2026-04-06 02:15:11.636402 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-04-06 02:15:11.636412 | orchestrator | Monday 06 April 2026 02:14:49 +0000 (0:00:00.137) 0:00:00.137 **********
2026-04-06 02:15:11.636423 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-04-06 02:15:11.636433 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-04-06 02:15:11.636444 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-04-06 02:15:11.636461 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-04-06 02:15:11.636485 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-04-06 02:15:11.636505 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-04-06 02:15:11.636520 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-04-06 02:15:11.636535 | orchestrator |
2026-04-06 02:15:11.636550 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-04-06 02:15:11.636564 | orchestrator |
2026-04-06 02:15:11.636579 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-06 02:15:11.636596 | orchestrator | Monday 06 April 2026 02:14:49 +0000 (0:00:00.882) 0:00:01.020 **********
2026-04-06 02:15:11.636613 | orchestrator | ok: [testbed-manager]
2026-04-06 02:15:11.636631 | orchestrator |
2026-04-06 02:15:11.636677 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-04-06 02:15:11.636696 | orchestrator |
2026-04-06 02:15:11.636712 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-06 02:15:11.636728 | orchestrator | Monday 06 April 2026 02:14:52 +0000 (0:00:02.776) 0:00:03.796 **********
2026-04-06 02:15:11.636744 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:15:11.636761 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:15:11.636778 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:15:11.636796 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:15:11.636812 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:15:11.636829 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:15:11.636846 | orchestrator |
2026-04-06 02:15:11.636863 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-04-06 02:15:11.636879 | orchestrator |
2026-04-06 02:15:11.636895 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-04-06 02:15:11.636930 | orchestrator | Monday 06 April 2026 02:14:54 +0000 (0:00:01.831) 0:00:05.628 **********
2026-04-06 02:15:11.636948 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-06 02:15:11.636965 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-06 02:15:11.636982 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-06 02:15:11.636999 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-06 02:15:11.637014 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-06 02:15:11.637030 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-06 02:15:11.637046 | orchestrator |
2026-04-06 02:15:11.637063 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-04-06 02:15:11.637079 | orchestrator | Monday 06 April 2026 02:14:56 +0000 (0:00:01.542) 0:00:07.170 **********
2026-04-06 02:15:11.637096 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:15:11.637113 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:15:11.637129 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:15:11.637145 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:15:11.637161 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:15:11.637178 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:15:11.637269 | orchestrator |
2026-04-06 02:15:11.637286 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-04-06 02:15:11.637339 | orchestrator | Monday 06 April 2026 02:14:59 +0000 (0:00:03.676) 0:00:10.847 **********
2026-04-06 02:15:11.637358 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:15:11.637373 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:15:11.637389 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:15:11.637406 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:15:11.637437 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:15:11.637467 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:15:11.637484 | orchestrator |
2026-04-06 02:15:11.637496 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-04-06 02:15:11.637506 | orchestrator |
2026-04-06 02:15:11.637516 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-04-06 02:15:11.637525 | orchestrator | Monday 06 April 2026 02:15:00 +0000 (0:00:00.819) 0:00:11.666 **********
2026-04-06 02:15:11.637535 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:15:11.637545 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:15:11.637554 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:15:11.637564 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:15:11.637573 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:15:11.637583 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:15:11.637592 | orchestrator | changed: [testbed-manager]
2026-04-06 02:15:11.637614 | orchestrator |
2026-04-06 02:15:11.637624 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-04-06 02:15:11.637634 | orchestrator | Monday 06 April 2026 02:15:02 +0000 (0:00:01.737) 0:00:13.404 **********
2026-04-06 02:15:11.637643 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:15:11.637653 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:15:11.637662 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:15:11.637672 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:15:11.637681 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:15:11.637691 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:15:11.637722 | orchestrator | changed: [testbed-manager]
2026-04-06 02:15:11.637733 | orchestrator |
2026-04-06 02:15:11.637743 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-04-06 02:15:11.637752 | orchestrator | Monday 06 April 2026 02:15:04 +0000 (0:00:01.715) 0:00:15.119 **********
2026-04-06 02:15:11.637762 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:15:11.637772 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:15:11.637781 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:15:11.637791 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:15:11.637801 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:15:11.637810 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:15:11.637820 | orchestrator | ok: [testbed-manager]
2026-04-06 02:15:11.637830 | orchestrator |
2026-04-06 02:15:11.637839 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-04-06 02:15:11.637849 | orchestrator | Monday 06 April 2026 02:15:05 +0000 (0:00:01.676) 0:00:16.796 **********
2026-04-06 02:15:11.637859 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:15:11.637868 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:15:11.637878 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:15:11.637887 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:15:11.637897 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:15:11.637906 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:15:11.637916 | orchestrator | changed: [testbed-manager]
2026-04-06 02:15:11.637925 | orchestrator |
2026-04-06 02:15:11.637935 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-04-06 02:15:11.637945 | orchestrator | Monday 06 April 2026 02:15:07 +0000 (0:00:02.030) 0:00:18.826 **********
2026-04-06 02:15:11.637954 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:15:11.637964 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:15:11.637973 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:15:11.637983 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:15:11.637993 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:15:11.638002 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:15:11.638012 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:15:11.638068 | orchestrator |
2026-04-06 02:15:11.638079 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-04-06 02:15:11.638088 | orchestrator |
2026-04-06 02:15:11.638098 | orchestrator | TASK [Install python3-docker] **************************************************
2026-04-06 02:15:11.638108 | orchestrator | Monday 06 April 2026 02:15:08 +0000 (0:00:00.713) 0:00:19.540 **********
2026-04-06 02:15:11.638117 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:15:11.638127 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:15:11.638137 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:15:11.638146 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:15:11.638156 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:15:11.638165 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:15:11.638208 | orchestrator | ok: [testbed-manager]
2026-04-06 02:15:11.638222 | orchestrator |
2026-04-06 02:15:11.638232 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 02:15:11.638243 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-06 02:15:11.638254 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:15:11.638272 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:15:11.638282 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:15:11.638291 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:15:11.638301 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:15:11.638311 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:15:11.638321 | orchestrator |
2026-04-06 02:15:11.638330 | orchestrator |
2026-04-06 02:15:11.638340 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 02:15:11.638350 | orchestrator | Monday 06 April 2026 02:15:11 +0000 (0:00:03.097) 0:00:22.637 **********
2026-04-06 02:15:11.638360 | orchestrator | ===============================================================================
2026-04-06 02:15:11.638369 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.68s
2026-04-06 02:15:11.638379 | orchestrator | Install python3-docker -------------------------------------------------- 3.10s
2026-04-06 02:15:11.638389 | orchestrator | Apply netplan configuration --------------------------------------------- 2.78s
2026-04-06 02:15:11.638398 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.03s
2026-04-06 02:15:11.638408 | orchestrator | Apply netplan configuration --------------------------------------------- 1.83s
2026-04-06 02:15:11.638418 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.74s
2026-04-06 02:15:11.638427 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.72s
2026-04-06 02:15:11.638437 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.68s
2026-04-06 02:15:11.638452 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.54s
2026-04-06 02:15:11.638468 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.88s
2026-04-06 02:15:11.638484 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.82s
2026-04-06 02:15:11.638509 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.71s
2026-04-06 02:15:12.457692 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-04-06 02:15:24.809406 | orchestrator | 2026-04-06 02:15:24 | INFO  | Task 9b071c8a-6b18-4c04-b324-871dd5c9fcc4 (reboot) was prepared for execution.
2026-04-06 02:15:24.809495 | orchestrator | 2026-04-06 02:15:24 | INFO  | It takes a moment until task 9b071c8a-6b18-4c04-b324-871dd5c9fcc4 (reboot) has been started and output is visible here.
2026-04-06 02:15:35.874519 | orchestrator |
2026-04-06 02:15:35.874660 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-06 02:15:35.874679 | orchestrator |
2026-04-06 02:15:35.874690 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-06 02:15:35.874700 | orchestrator | Monday 06 April 2026 02:15:29 +0000 (0:00:00.233) 0:00:00.233 **********
2026-04-06 02:15:35.874711 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:15:35.874722 | orchestrator |
2026-04-06 02:15:35.874732 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-06 02:15:35.874742 | orchestrator | Monday 06 April 2026 02:15:29 +0000 (0:00:00.112) 0:00:00.346 **********
2026-04-06 02:15:35.874751 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:15:35.874761 | orchestrator |
2026-04-06 02:15:35.874771 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-06 02:15:35.874805 | orchestrator | Monday 06 April 2026 02:15:30 +0000 (0:00:00.965) 0:00:01.312 **********
2026-04-06 02:15:35.874815 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:15:35.874825 | orchestrator |
2026-04-06 02:15:35.874834 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-06 02:15:35.874844 | orchestrator |
2026-04-06 02:15:35.874853 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-06 02:15:35.874863 | orchestrator | Monday 06 April 2026 02:15:30 +0000 (0:00:00.143) 0:00:01.456 **********
2026-04-06 02:15:35.874873 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:15:35.874882 | orchestrator |
2026-04-06 02:15:35.874892 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-06 02:15:35.874901 | orchestrator | Monday 06 April 2026 02:15:30 +0000 (0:00:00.127) 0:00:01.583 **********
2026-04-06 02:15:35.874911 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:15:35.874920 | orchestrator |
2026-04-06 02:15:35.874930 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-06 02:15:35.874953 | orchestrator | Monday 06 April 2026 02:15:31 +0000 (0:00:00.701) 0:00:02.285 **********
2026-04-06 02:15:35.874963 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:15:35.874973 | orchestrator |
2026-04-06 02:15:35.874983 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-06 02:15:35.874992 | orchestrator |
2026-04-06 02:15:35.875002 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-06 02:15:35.875011 | orchestrator | Monday 06 April 2026 02:15:31 +0000 (0:00:00.126) 0:00:02.411 **********
2026-04-06 02:15:35.875022 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:15:35.875033 | orchestrator |
2026-04-06 02:15:35.875044 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-06 02:15:35.875056 | orchestrator | Monday 06 April 2026 02:15:31 +0000 (0:00:00.227) 0:00:02.639 **********
2026-04-06 02:15:35.875067 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:15:35.875078 | orchestrator |
2026-04-06 02:15:35.875089 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-06 02:15:35.875100 | orchestrator | Monday 06 April 2026 02:15:32 +0000 (0:00:00.659) 0:00:03.299 **********
2026-04-06 02:15:35.875111 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:15:35.875122 | orchestrator |
2026-04-06 02:15:35.875133 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-06 02:15:35.875144 | orchestrator |
2026-04-06 02:15:35.875155 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-06 02:15:35.875190 | orchestrator | Monday 06 April 2026 02:15:32 +0000 (0:00:00.127) 0:00:03.426 **********
2026-04-06 02:15:35.875210 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:15:35.875224 | orchestrator |
2026-04-06 02:15:35.875234 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-06 02:15:35.875245 | orchestrator | Monday 06 April 2026 02:15:32 +0000 (0:00:00.124) 0:00:03.551 **********
2026-04-06 02:15:35.875256 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:15:35.875268 | orchestrator |
2026-04-06 02:15:35.875278 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-06 02:15:35.875290 | orchestrator | Monday 06 April 2026 02:15:33 +0000 (0:00:00.677) 0:00:04.229 **********
2026-04-06 02:15:35.875301 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:15:35.875312 | orchestrator |
2026-04-06 02:15:35.875329 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-06 02:15:35.875352 | orchestrator |
2026-04-06 02:15:35.875371 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-06 02:15:35.875387 | orchestrator | Monday 06 April 2026 02:15:33 +0000 (0:00:00.123) 0:00:04.353 **********
2026-04-06 02:15:35.875402 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:15:35.875417 | orchestrator |
2026-04-06 02:15:35.875432 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-06 02:15:35.875447 | orchestrator | Monday 06 April 2026 02:15:33 +0000 (0:00:00.120) 0:00:04.473 **********
2026-04-06 02:15:35.875476 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:15:35.875493 | orchestrator |
2026-04-06 02:15:35.875509 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-06 02:15:35.875524 | orchestrator | Monday 06 April 2026 02:15:34 +0000 (0:00:00.662) 0:00:05.135 **********
2026-04-06 02:15:35.875540 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:15:35.875557 | orchestrator |
2026-04-06 02:15:35.875573 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-04-06 02:15:35.875591 | orchestrator |
2026-04-06 02:15:35.875607 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-04-06 02:15:35.875623 | orchestrator | Monday 06 April 2026 02:15:34 +0000 (0:00:00.134) 0:00:05.270 **********
2026-04-06 02:15:35.875637 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:15:35.875646 | orchestrator |
2026-04-06 02:15:35.875656 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-04-06 02:15:35.875666 | orchestrator | Monday 06 April 2026 02:15:34 +0000 (0:00:00.102) 0:00:05.373 **********
2026-04-06 02:15:35.875675 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:15:35.875685 | orchestrator |
2026-04-06 02:15:35.875694 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-04-06 02:15:35.875704 | orchestrator | Monday 06 April 2026 02:15:35 +0000 (0:00:00.706) 0:00:06.080 **********
2026-04-06 02:15:35.875733 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:15:35.875744 | orchestrator |
2026-04-06 02:15:35.875753 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 02:15:35.875764 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:15:35.875775 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:15:35.875785 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:15:35.875794 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:15:35.875804 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:15:35.875814 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:15:35.875825 | orchestrator |
2026-04-06 02:15:35.875843 | orchestrator |
2026-04-06 02:15:35.875858 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 02:15:35.875873 | orchestrator | Monday 06 April 2026 02:15:35 +0000 (0:00:00.044) 0:00:06.124 **********
2026-04-06 02:15:35.875897 | orchestrator | ===============================================================================
2026-04-06 02:15:35.875914 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.38s
2026-04-06 02:15:35.875930 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.82s
2026-04-06 02:15:35.875947 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.70s
2026-04-06 02:15:36.214834 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-04-06 02:15:48.458623 | orchestrator | 2026-04-06 02:15:48 | INFO  | Task bcc9f874-dec1-4894-96a2-c1fe4ef62279 (wait-for-connection) was prepared for execution.
2026-04-06 02:15:48.458736 | orchestrator | 2026-04-06 02:15:48 | INFO  | It takes a moment until task bcc9f874-dec1-4894-96a2-c1fe4ef62279 (wait-for-connection) has been started and output is visible here.
2026-04-06 02:16:05.105199 | orchestrator |
2026-04-06 02:16:05.105313 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-04-06 02:16:05.105359 | orchestrator |
2026-04-06 02:16:05.105372 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-04-06 02:16:05.105384 | orchestrator | Monday 06 April 2026 02:15:53 +0000 (0:00:00.256) 0:00:00.256 **********
2026-04-06 02:16:05.105396 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:16:05.105408 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:16:05.105419 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:16:05.105430 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:16:05.105441 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:16:05.105452 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:16:05.105463 | orchestrator |
2026-04-06 02:16:05.105474 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 02:16:05.105486 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:16:05.105499 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:16:05.105510 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:16:05.105522 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:16:05.105533 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:16:05.105544 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:16:05.105555 | orchestrator |
2026-04-06 02:16:05.105567 | orchestrator |
2026-04-06 02:16:05.105579 | orchestrator | TASKS RECAP
******************************************************************** 2026-04-06 02:16:05.105590 | orchestrator | Monday 06 April 2026 02:16:04 +0000 (0:00:11.533) 0:00:11.789 ********** 2026-04-06 02:16:05.105601 | orchestrator | =============================================================================== 2026-04-06 02:16:05.105612 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.53s 2026-04-06 02:16:05.525938 | orchestrator | + osism apply hddtemp 2026-04-06 02:16:18.172350 | orchestrator | 2026-04-06 02:16:18 | INFO  | Task 2a8362d9-de11-4b14-8e7b-9bee93473841 (hddtemp) was prepared for execution. 2026-04-06 02:16:18.172453 | orchestrator | 2026-04-06 02:16:18 | INFO  | It takes a moment until task 2a8362d9-de11-4b14-8e7b-9bee93473841 (hddtemp) has been started and output is visible here. 2026-04-06 02:16:47.494902 | orchestrator | 2026-04-06 02:16:47.494997 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-04-06 02:16:47.495008 | orchestrator | 2026-04-06 02:16:47.495015 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-04-06 02:16:47.495022 | orchestrator | Monday 06 April 2026 02:16:23 +0000 (0:00:00.320) 0:00:00.320 ********** 2026-04-06 02:16:47.495029 | orchestrator | ok: [testbed-manager] 2026-04-06 02:16:47.495036 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:16:47.495043 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:16:47.495082 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:16:47.495090 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:16:47.495097 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:16:47.495103 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:16:47.495110 | orchestrator | 2026-04-06 02:16:47.495146 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-04-06 02:16:47.495155 | orchestrator | Monday 06 April 2026 
02:16:23 +0000 (0:00:00.805) 0:00:01.125 ********** 2026-04-06 02:16:47.495163 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 02:16:47.495191 | orchestrator | 2026-04-06 02:16:47.495198 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-04-06 02:16:47.495204 | orchestrator | Monday 06 April 2026 02:16:25 +0000 (0:00:01.366) 0:00:02.492 ********** 2026-04-06 02:16:47.495210 | orchestrator | ok: [testbed-manager] 2026-04-06 02:16:47.495216 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:16:47.495221 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:16:47.495227 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:16:47.495233 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:16:47.495239 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:16:47.495245 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:16:47.495251 | orchestrator | 2026-04-06 02:16:47.495257 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-04-06 02:16:47.495274 | orchestrator | Monday 06 April 2026 02:16:27 +0000 (0:00:02.075) 0:00:04.568 ********** 2026-04-06 02:16:47.495281 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:16:47.495287 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:16:47.495293 | orchestrator | changed: [testbed-manager] 2026-04-06 02:16:47.495299 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:16:47.495304 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:16:47.495310 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:16:47.495316 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:16:47.495321 | orchestrator | 2026-04-06 02:16:47.495327 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module 
is available] ********* 2026-04-06 02:16:47.495333 | orchestrator | Monday 06 April 2026 02:16:28 +0000 (0:00:01.254) 0:00:05.822 ********** 2026-04-06 02:16:47.495339 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:16:47.495345 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:16:47.495350 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:16:47.495356 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:16:47.495362 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:16:47.495368 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:16:47.495374 | orchestrator | ok: [testbed-manager] 2026-04-06 02:16:47.495379 | orchestrator | 2026-04-06 02:16:47.495385 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-04-06 02:16:47.495391 | orchestrator | Monday 06 April 2026 02:16:29 +0000 (0:00:01.240) 0:00:07.063 ********** 2026-04-06 02:16:47.495397 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:16:47.495403 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:16:47.495408 | orchestrator | changed: [testbed-manager] 2026-04-06 02:16:47.495414 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:16:47.495420 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:16:47.495426 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:16:47.495431 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:16:47.495437 | orchestrator | 2026-04-06 02:16:47.495443 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-04-06 02:16:47.495449 | orchestrator | Monday 06 April 2026 02:16:30 +0000 (0:00:00.993) 0:00:08.056 ********** 2026-04-06 02:16:47.495455 | orchestrator | changed: [testbed-manager] 2026-04-06 02:16:47.495460 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:16:47.495466 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:16:47.495472 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:16:47.495477 | orchestrator | changed: 
[testbed-node-3] 2026-04-06 02:16:47.495483 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:16:47.495489 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:16:47.495495 | orchestrator | 2026-04-06 02:16:47.495500 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-04-06 02:16:47.495506 | orchestrator | Monday 06 April 2026 02:16:43 +0000 (0:00:12.689) 0:00:20.745 ********** 2026-04-06 02:16:47.495512 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 02:16:47.495519 | orchestrator | 2026-04-06 02:16:47.495531 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-04-06 02:16:47.495537 | orchestrator | Monday 06 April 2026 02:16:45 +0000 (0:00:01.456) 0:00:22.201 ********** 2026-04-06 02:16:47.495542 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:16:47.495548 | orchestrator | changed: [testbed-manager] 2026-04-06 02:16:47.495554 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:16:47.495560 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:16:47.495566 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:16:47.495572 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:16:47.495577 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:16:47.495583 | orchestrator | 2026-04-06 02:16:47.495589 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 02:16:47.495595 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 02:16:47.495615 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 02:16:47.495622 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 02:16:47.495627 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 02:16:47.495633 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 02:16:47.495639 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 02:16:47.495645 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 02:16:47.495651 | orchestrator | 2026-04-06 02:16:47.495657 | orchestrator | 2026-04-06 02:16:47.495663 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 02:16:47.495668 | orchestrator | Monday 06 April 2026 02:16:46 +0000 (0:00:01.986) 0:00:24.187 ********** 2026-04-06 02:16:47.495674 | orchestrator | =============================================================================== 2026-04-06 02:16:47.495680 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.69s 2026-04-06 02:16:47.495686 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.08s 2026-04-06 02:16:47.495692 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.99s 2026-04-06 02:16:47.495701 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.46s 2026-04-06 02:16:47.495707 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.37s 2026-04-06 02:16:47.495713 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.25s 2026-04-06 02:16:47.495719 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.24s 2026-04-06 02:16:47.495725 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.99s 2026-04-06 02:16:47.495730 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.81s 2026-04-06 02:16:47.900924 | orchestrator | ++ semver 9.5.0 7.1.1 2026-04-06 02:16:47.952302 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-06 02:16:47.952388 | orchestrator | + sudo systemctl restart manager.service 2026-04-06 02:17:05.886492 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-06 02:17:05.886590 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-06 02:17:05.887215 | orchestrator | + local max_attempts=60 2026-04-06 02:17:05.887239 | orchestrator | + local name=ceph-ansible 2026-04-06 02:17:05.887248 | orchestrator | + local attempt_num=1 2026-04-06 02:17:05.887258 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-06 02:17:05.926142 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-06 02:17:05.926246 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-06 02:17:05.926277 | orchestrator | + sleep 5 2026-04-06 02:17:10.932723 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-06 02:17:10.964022 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-06 02:17:10.964190 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-06 02:17:10.964217 | orchestrator | + sleep 5 2026-04-06 02:17:15.967121 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-06 02:17:16.058993 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-06 02:17:16.059126 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-06 02:17:16.059141 | orchestrator | + sleep 5 2026-04-06 02:17:21.062584 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-06 02:17:21.102206 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-06 02:17:21.102297 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-04-06 02:17:21.102309 | orchestrator | + sleep 5 2026-04-06 02:17:26.106579 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-06 02:17:26.150460 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-06 02:17:26.150564 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-06 02:17:26.150584 | orchestrator | + sleep 5 2026-04-06 02:17:31.154642 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-06 02:17:31.193701 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-06 02:17:31.193790 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-06 02:17:31.193802 | orchestrator | + sleep 5 2026-04-06 02:17:36.198672 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-06 02:17:36.234949 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-06 02:17:36.235015 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-06 02:17:36.235020 | orchestrator | + sleep 5 2026-04-06 02:17:41.243800 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-06 02:17:41.286234 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-06 02:17:41.286334 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-06 02:17:41.286350 | orchestrator | + sleep 5 2026-04-06 02:17:46.289262 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-06 02:17:46.381368 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-06 02:17:46.381459 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-06 02:17:46.381473 | orchestrator | + sleep 5 2026-04-06 02:17:51.388033 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-06 02:17:51.439278 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-06 02:17:51.440648 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-04-06 02:17:51.440998 | orchestrator | + sleep 5 2026-04-06 02:17:56.444678 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-06 02:17:56.477376 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-06 02:17:56.477475 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-06 02:17:56.477489 | orchestrator | + sleep 5 2026-04-06 02:18:01.481295 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-06 02:18:01.509602 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-06 02:18:01.509687 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-06 02:18:01.509698 | orchestrator | + sleep 5 2026-04-06 02:18:06.514142 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-06 02:18:06.556847 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-06 02:18:06.556963 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-06 02:18:06.556978 | orchestrator | + sleep 5 2026-04-06 02:18:11.560640 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-06 02:18:11.602620 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-06 02:18:11.602714 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-06 02:18:11.602734 | orchestrator | + local max_attempts=60 2026-04-06 02:18:11.602750 | orchestrator | + local name=kolla-ansible 2026-04-06 02:18:11.602765 | orchestrator | + local attempt_num=1 2026-04-06 02:18:11.603322 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-06 02:18:11.644768 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-06 02:18:11.644880 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-06 02:18:11.644891 | orchestrator | + local max_attempts=60 2026-04-06 02:18:11.644925 | orchestrator | + local name=osism-ansible 2026-04-06 02:18:11.644933 | 
orchestrator | + local attempt_num=1 2026-04-06 02:18:11.646296 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-06 02:18:11.687485 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-06 02:18:11.687576 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-06 02:18:11.687590 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-06 02:18:11.872184 | orchestrator | ARA in ceph-ansible already disabled. 2026-04-06 02:18:12.039982 | orchestrator | ARA in kolla-ansible already disabled. 2026-04-06 02:18:12.194434 | orchestrator | ARA in osism-ansible already disabled. 2026-04-06 02:18:12.365394 | orchestrator | ARA in osism-kubernetes already disabled. 2026-04-06 02:18:12.365974 | orchestrator | + osism apply gather-facts 2026-04-06 02:18:24.857617 | orchestrator | 2026-04-06 02:18:24 | INFO  | Task a5fcbe5a-37c2-415b-9c22-b4312e302f4c (gather-facts) was prepared for execution. 2026-04-06 02:18:24.857724 | orchestrator | 2026-04-06 02:18:24 | INFO  | It takes a moment until task a5fcbe5a-37c2-415b-9c22-b4312e302f4c (gather-facts) has been started and output is visible here. 
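The `set -x` trace above shows `wait_for_container_healthy` polling `docker inspect` every 5 seconds until a container reports `healthy`. The helper itself lives in the testbed scripts and is not printed here; the following is a hedged reconstruction from the trace (variable names, the attempt counter, and the 5-second interval are taken from the trace; the error message is invented for illustration):

```shell
# Reconstructed sketch of the health-wait helper seen in the trace above.
# Assumes the container defines a Docker HEALTHCHECK, so that
# .State.Health.Status is one of "starting", "healthy", or "unhealthy".
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll the container's health status until Docker reports "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the run above the `ceph-ansible` container cycles through `unhealthy` and `starting` for roughly a minute after the `manager.service` restart before the loop sees `healthy` and moves on to `kolla-ansible` and `osism-ansible`, which are already healthy on the first probe.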
2026-04-06 02:18:39.790387 | orchestrator | 2026-04-06 02:18:39.791432 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-06 02:18:39.791501 | orchestrator | 2026-04-06 02:18:39.791517 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-06 02:18:39.791532 | orchestrator | Monday 06 April 2026 02:18:30 +0000 (0:00:00.300) 0:00:00.300 ********** 2026-04-06 02:18:39.791545 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:18:39.791559 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:18:39.791572 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:18:39.791585 | orchestrator | ok: [testbed-manager] 2026-04-06 02:18:39.791598 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:18:39.791610 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:18:39.791623 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:18:39.791635 | orchestrator | 2026-04-06 02:18:39.791648 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-06 02:18:39.791659 | orchestrator | 2026-04-06 02:18:39.791683 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-06 02:18:39.791695 | orchestrator | Monday 06 April 2026 02:18:38 +0000 (0:00:08.397) 0:00:08.698 ********** 2026-04-06 02:18:39.791708 | orchestrator | skipping: [testbed-manager] 2026-04-06 02:18:39.791720 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:18:39.791733 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:18:39.791746 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:18:39.791758 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:18:39.791771 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:18:39.791782 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:18:39.791795 | orchestrator | 2026-04-06 02:18:39.791808 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-06 02:18:39.791821 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 02:18:39.791836 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 02:18:39.791848 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 02:18:39.791861 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 02:18:39.791873 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 02:18:39.791886 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 02:18:39.791898 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 02:18:39.791939 | orchestrator | 2026-04-06 02:18:39.791953 | orchestrator | 2026-04-06 02:18:39.791966 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 02:18:39.791978 | orchestrator | Monday 06 April 2026 02:18:39 +0000 (0:00:00.610) 0:00:09.309 ********** 2026-04-06 02:18:39.791991 | orchestrator | =============================================================================== 2026-04-06 02:18:39.792004 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.40s 2026-04-06 02:18:39.792017 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.61s 2026-04-06 02:18:40.191680 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-04-06 02:18:40.209879 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-04-06 
02:18:40.225141 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-04-06 02:18:40.244201 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-04-06 02:18:40.258347 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-04-06 02:18:40.273243 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-04-06 02:18:40.285398 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-04-06 02:18:40.297533 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-04-06 02:18:40.317864 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-04-06 02:18:40.331760 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-04-06 02:18:40.348072 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-04-06 02:18:40.360821 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-04-06 02:18:40.375884 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-04-06 02:18:40.391717 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-04-06 02:18:40.405369 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-04-06 02:18:40.417299 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-04-06 02:18:40.430341 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-04-06 02:18:40.443419 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-04-06 02:18:40.456956 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-04-06 02:18:40.470587 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-04-06 02:18:40.486523 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-04-06 02:18:40.503267 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-04-06 02:18:40.518378 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-04-06 02:18:40.532481 | orchestrator | + [[ false == \t\r\u\e ]] 2026-04-06 02:18:40.717047 | orchestrator | ok: Runtime: 0:25:51.938282 2026-04-06 02:18:40.811156 | 2026-04-06 02:18:40.811293 | TASK [Deploy services] 2026-04-06 02:18:41.493476 | orchestrator | 2026-04-06 02:18:41.493605 | orchestrator | # DEPLOY SERVICES 2026-04-06 02:18:41.493615 | orchestrator | 2026-04-06 02:18:41.493621 | orchestrator | + set -e 2026-04-06 02:18:41.493626 | orchestrator | + echo 2026-04-06 02:18:41.493632 | orchestrator | + echo '# DEPLOY SERVICES' 2026-04-06 02:18:41.493638 | orchestrator | + echo 2026-04-06 02:18:41.493660 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-06 02:18:41.493669 | orchestrator | ++ export INTERACTIVE=false 2026-04-06 02:18:41.493676 | orchestrator | ++ INTERACTIVE=false 2026-04-06 
02:18:41.493682 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-06 02:18:41.493692 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-06 02:18:41.493696 | orchestrator | + source /opt/manager-vars.sh 2026-04-06 02:18:41.493702 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-06 02:18:41.493707 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-06 02:18:41.493714 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-06 02:18:41.493719 | orchestrator | ++ CEPH_VERSION=reef 2026-04-06 02:18:41.493725 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-06 02:18:41.493729 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-06 02:18:41.493737 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-06 02:18:41.493742 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-06 02:18:41.493746 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-06 02:18:41.493752 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-06 02:18:41.493757 | orchestrator | ++ export ARA=false 2026-04-06 02:18:41.493761 | orchestrator | ++ ARA=false 2026-04-06 02:18:41.493766 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-06 02:18:41.493770 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-06 02:18:41.493775 | orchestrator | ++ export TEMPEST=false 2026-04-06 02:18:41.493779 | orchestrator | ++ TEMPEST=false 2026-04-06 02:18:41.493783 | orchestrator | ++ export IS_ZUUL=true 2026-04-06 02:18:41.493787 | orchestrator | ++ IS_ZUUL=true 2026-04-06 02:18:41.493792 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235 2026-04-06 02:18:41.493796 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235 2026-04-06 02:18:41.493801 | orchestrator | ++ export EXTERNAL_API=false 2026-04-06 02:18:41.493805 | orchestrator | ++ EXTERNAL_API=false 2026-04-06 02:18:41.493810 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-06 02:18:41.493814 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-06 02:18:41.493818 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-06 
02:18:41.493822 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-06 02:18:41.493826 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-06 02:18:41.493833 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-06 02:18:41.493837 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-04-06 02:18:41.500604 | orchestrator | + set -e
2026-04-06 02:18:41.501268 | orchestrator |
2026-04-06 02:18:41.501297 | orchestrator | # PULL IMAGES
2026-04-06 02:18:41.501302 | orchestrator |
2026-04-06 02:18:41.501307 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-06 02:18:41.501314 | orchestrator | ++ export INTERACTIVE=false
2026-04-06 02:18:41.501320 | orchestrator | ++ INTERACTIVE=false
2026-04-06 02:18:41.501324 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-06 02:18:41.501328 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-06 02:18:41.501332 | orchestrator | + source /opt/manager-vars.sh
2026-04-06 02:18:41.501336 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-06 02:18:41.501340 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-06 02:18:41.501344 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-06 02:18:41.501347 | orchestrator | ++ CEPH_VERSION=reef
2026-04-06 02:18:41.501352 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-06 02:18:41.501356 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-06 02:18:41.501360 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-06 02:18:41.501364 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-06 02:18:41.501368 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-06 02:18:41.501372 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-06 02:18:41.501376 | orchestrator | ++ export ARA=false
2026-04-06 02:18:41.501380 | orchestrator | ++ ARA=false
2026-04-06 02:18:41.501387 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-06 02:18:41.501390 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-06 02:18:41.501394 | orchestrator | ++ export TEMPEST=false
2026-04-06 02:18:41.501398 | orchestrator | ++ TEMPEST=false
2026-04-06 02:18:41.501402 | orchestrator | ++ export IS_ZUUL=true
2026-04-06 02:18:41.501405 | orchestrator | ++ IS_ZUUL=true
2026-04-06 02:18:41.501409 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-04-06 02:18:41.501413 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-04-06 02:18:41.501417 | orchestrator | ++ export EXTERNAL_API=false
2026-04-06 02:18:41.501421 | orchestrator | ++ EXTERNAL_API=false
2026-04-06 02:18:41.501425 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-06 02:18:41.501428 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-06 02:18:41.501452 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-06 02:18:41.501456 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-06 02:18:41.501460 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-06 02:18:41.501464 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-06 02:18:41.501468 | orchestrator | + echo
2026-04-06 02:18:41.501471 | orchestrator | + echo '# PULL IMAGES'
2026-04-06 02:18:41.501475 | orchestrator | + echo
2026-04-06 02:18:41.501483 | orchestrator | ++ semver 9.5.0 7.0.0
2026-04-06 02:18:41.543390 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-06 02:18:41.543543 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-04-06 02:18:43.710414 | orchestrator | 2026-04-06 02:18:43 | INFO  | Trying to run play pull-images in environment custom
2026-04-06 02:18:53.892562 | orchestrator | 2026-04-06 02:18:53 | INFO  | Task 94515e7d-cfb4-489a-95af-8d91aa0ce81b (pull-images) was prepared for execution.
2026-04-06 02:18:53.892654 | orchestrator | 2026-04-06 02:18:53 | INFO  | Task 94515e7d-cfb4-489a-95af-8d91aa0ce81b is running in background. No more output. Check ARA for logs.
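The trace above gates the pull-images play on the manager version: `semver 9.5.0 7.0.0` prints `1`, and the script proceeds because the result is `>= 0`. The following is a minimal sketch of that gate; `semver_cmp` is a hypothetical stand-in for the `semver` helper (its implementation is not shown in the log), assumed to print `1`, `0`, or `-1` for greater, equal, or less.

```shell
# Hypothetical reimplementation of the version gate seen in the trace.
# Assumption: the real `semver` helper prints 1/0/-1; here we emulate
# that with GNU sort -V (version-aware sort).
semver_cmp() {
    if [ "$1" = "$2" ]; then echo 0; return; fi
    # if the second version sorts first, the first version is newer
    if [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; then
        echo 1
    else
        echo -1
    fi
}

MANAGER_VERSION=9.5.0
result=$(semver_cmp "$MANAGER_VERSION" 7.0.0)
if [ "$result" -ge 0 ]; then
    echo "manager >= 7.0.0, running pull-images play"
fi
```

The gate lets newer configuration repositories keep working against older manager releases by skipping plays that the older release does not ship.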
2026-04-06 02:18:54.308082 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh
2026-04-06 02:19:06.552636 | orchestrator | 2026-04-06 02:19:06 | INFO  | Task 24d43bc8-f146-47dd-87ab-1768e5c091ad (cgit) was prepared for execution.
2026-04-06 02:19:06.552771 | orchestrator | 2026-04-06 02:19:06 | INFO  | Task 24d43bc8-f146-47dd-87ab-1768e5c091ad is running in background. No more output. Check ARA for logs.
2026-04-06 02:19:19.110566 | orchestrator | 2026-04-06 02:19:19 | INFO  | Task 557bfd92-07a0-4aff-9843-508bfcc39019 (dotfiles) was prepared for execution.
2026-04-06 02:19:19.110647 | orchestrator | 2026-04-06 02:19:19 | INFO  | Task 557bfd92-07a0-4aff-9843-508bfcc39019 is running in background. No more output. Check ARA for logs.
2026-04-06 02:19:32.097611 | orchestrator | 2026-04-06 02:19:32 | INFO  | Task 4e24346e-f42c-4811-9999-4925a3a691a3 (homer) was prepared for execution.
2026-04-06 02:19:32.097702 | orchestrator | 2026-04-06 02:19:32 | INFO  | Task 4e24346e-f42c-4811-9999-4925a3a691a3 is running in background. No more output. Check ARA for logs.
2026-04-06 02:19:44.992653 | orchestrator | 2026-04-06 02:19:44 | INFO  | Task 84eef5e5-c6f4-45c2-aad5-115d67573f90 (phpmyadmin) was prepared for execution.
2026-04-06 02:19:44.992774 | orchestrator | 2026-04-06 02:19:44 | INFO  | Task 84eef5e5-c6f4-45c2-aad5-115d67573f90 is running in background. No more output. Check ARA for logs.
2026-04-06 02:19:58.078896 | orchestrator | 2026-04-06 02:19:58 | INFO  | Task 6cb83a29-340d-42eb-b045-f42aaab9f7f5 (sosreport) was prepared for execution.
2026-04-06 02:19:58.079084 | orchestrator | 2026-04-06 02:19:58 | INFO  | Task 6cb83a29-340d-42eb-b045-f42aaab9f7f5 is running in background. No more output. Check ARA for logs.
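001-helpers.sh queues five helper plays (cgit, dotfiles, homer, phpmyadmin, sosreport), each backgrounded so the job does not block on them. A guess at the shape of that script, based only on the tasks it queued; the real 001-helpers.sh is not shown in the log, and `OSISM` is made overridable so the sketch can be dry-run without the osism CLI installed.

```shell
# Sketch (assumption) of what 001-helpers.sh might do: queue each helper
# play with --no-wait, the flag visible elsewhere in this log, so the
# plays run in the background and progress is tracked in ARA.
OSISM="${OSISM:-echo osism}"   # dry-run by default; set OSISM=osism for real
for play in cgit dotfiles homer phpmyadmin sosreport; do
    $OSISM apply --no-wait "$play"
done
```

With the default dry-run setting the loop just prints the five commands it would issue.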
2026-04-06 02:19:58.454935 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh
2026-04-06 02:19:58.464345 | orchestrator | + set -e
2026-04-06 02:19:58.464427 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-06 02:19:58.464441 | orchestrator | ++ export INTERACTIVE=false
2026-04-06 02:19:58.464452 | orchestrator | ++ INTERACTIVE=false
2026-04-06 02:19:58.464465 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-06 02:19:58.464475 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-06 02:19:58.464485 | orchestrator | + source /opt/manager-vars.sh
2026-04-06 02:19:58.464494 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-06 02:19:58.464504 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-06 02:19:58.464514 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-06 02:19:58.464523 | orchestrator | ++ CEPH_VERSION=reef
2026-04-06 02:19:58.464533 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-06 02:19:58.464543 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-06 02:19:58.464557 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-06 02:19:58.464574 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-06 02:19:58.464600 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-06 02:19:58.464618 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-06 02:19:58.464633 | orchestrator | ++ export ARA=false
2026-04-06 02:19:58.464649 | orchestrator | ++ ARA=false
2026-04-06 02:19:58.464666 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-06 02:19:58.464715 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-06 02:19:58.464735 | orchestrator | ++ export TEMPEST=false
2026-04-06 02:19:58.464753 | orchestrator | ++ TEMPEST=false
2026-04-06 02:19:58.464769 | orchestrator | ++ export IS_ZUUL=true
2026-04-06 02:19:58.464784 | orchestrator | ++ IS_ZUUL=true
2026-04-06 02:19:58.464809 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-04-06 02:19:58.464825 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-04-06 02:19:58.464835 | orchestrator | ++ export EXTERNAL_API=false
2026-04-06 02:19:58.464845 | orchestrator | ++ EXTERNAL_API=false
2026-04-06 02:19:58.464855 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-06 02:19:58.464864 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-06 02:19:58.464874 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-06 02:19:58.464883 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-06 02:19:58.464893 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-06 02:19:58.464903 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-06 02:19:58.465285 | orchestrator | ++ semver 9.5.0 8.0.3
2026-04-06 02:19:58.542154 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-06 02:19:58.542233 | orchestrator | + osism apply frr
2026-04-06 02:20:10.921995 | orchestrator | 2026-04-06 02:20:10 | INFO  | Task 8baa6252-7ef7-4bab-a1dc-56b3da98d6bc (frr) was prepared for execution.
2026-04-06 02:20:10.922194 | orchestrator | 2026-04-06 02:20:10 | INFO  | It takes a moment until task 8baa6252-7ef7-4bab-a1dc-56b3da98d6bc (frr) has been started and output is visible here.
2026-04-06 02:20:49.052154 | orchestrator |
2026-04-06 02:20:49.052234 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-04-06 02:20:49.052243 | orchestrator |
2026-04-06 02:20:49.052250 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-04-06 02:20:49.052261 | orchestrator | Monday 06 April 2026 02:20:18 +0000 (0:00:00.321) 0:00:00.321 **********
2026-04-06 02:20:49.052266 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-04-06 02:20:49.052274 | orchestrator |
2026-04-06 02:20:49.052279 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-04-06 02:20:49.052284 | orchestrator | Monday 06 April 2026 02:20:18 +0000 (0:00:00.259) 0:00:00.581 **********
2026-04-06 02:20:49.052290 | orchestrator | changed: [testbed-manager]
2026-04-06 02:20:49.052295 | orchestrator |
2026-04-06 02:20:49.052300 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-04-06 02:20:49.052307 | orchestrator | Monday 06 April 2026 02:20:21 +0000 (0:00:03.261) 0:00:03.842 **********
2026-04-06 02:20:49.052312 | orchestrator | changed: [testbed-manager]
2026-04-06 02:20:49.052317 | orchestrator |
2026-04-06 02:20:49.052322 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-04-06 02:20:49.052327 | orchestrator | Monday 06 April 2026 02:20:35 +0000 (0:00:14.427) 0:00:18.270 **********
2026-04-06 02:20:49.052331 | orchestrator | ok: [testbed-manager]
2026-04-06 02:20:49.052337 | orchestrator |
2026-04-06 02:20:49.052342 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-04-06 02:20:49.052347 | orchestrator | Monday 06 April 2026 02:20:37 +0000 (0:00:01.347) 0:00:19.617 **********
2026-04-06 02:20:49.052352 | orchestrator | changed: [testbed-manager]
2026-04-06 02:20:49.052357 | orchestrator |
2026-04-06 02:20:49.052362 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-04-06 02:20:49.052366 | orchestrator | Monday 06 April 2026 02:20:39 +0000 (0:00:01.679) 0:00:21.297 **********
2026-04-06 02:20:49.052371 | orchestrator | ok: [testbed-manager]
2026-04-06 02:20:49.052376 | orchestrator |
2026-04-06 02:20:49.052381 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-04-06 02:20:49.052387 | orchestrator | Monday 06 April 2026 02:20:40 +0000 (0:00:01.616) 0:00:22.914 **********
2026-04-06 02:20:49.052392 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:20:49.052397 | orchestrator |
2026-04-06 02:20:49.052402 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-04-06 02:20:49.052407 | orchestrator | Monday 06 April 2026 02:20:40 +0000 (0:00:00.151) 0:00:23.066 **********
2026-04-06 02:20:49.052425 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:20:49.052431 | orchestrator |
2026-04-06 02:20:49.052436 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-04-06 02:20:49.052441 | orchestrator | Monday 06 April 2026 02:20:40 +0000 (0:00:00.162) 0:00:23.228 **********
2026-04-06 02:20:49.052446 | orchestrator | changed: [testbed-manager]
2026-04-06 02:20:49.052451 | orchestrator |
2026-04-06 02:20:49.052455 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-04-06 02:20:49.052460 | orchestrator | Monday 06 April 2026 02:20:42 +0000 (0:00:01.298) 0:00:24.526 **********
2026-04-06 02:20:49.052466 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-04-06 02:20:49.052470 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-04-06 02:20:49.052477 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-04-06 02:20:49.052482 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-04-06 02:20:49.052487 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-04-06 02:20:49.052492 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-04-06 02:20:49.052496 | orchestrator |
2026-04-06 02:20:49.052501 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-04-06 02:20:49.052506 | orchestrator | Monday 06 April 2026 02:20:45 +0000 (0:00:02.967) 0:00:27.493 **********
2026-04-06 02:20:49.052511 | orchestrator | ok: [testbed-manager]
2026-04-06 02:20:49.052516 | orchestrator |
2026-04-06 02:20:49.052521 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-04-06 02:20:49.052526 | orchestrator | Monday 06 April 2026 02:20:47 +0000 (0:00:01.890) 0:00:29.384 **********
2026-04-06 02:20:49.052531 | orchestrator | changed: [testbed-manager]
2026-04-06 02:20:49.052535 | orchestrator |
2026-04-06 02:20:49.052540 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 02:20:49.052546 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:20:49.052551 | orchestrator |
2026-04-06 02:20:49.052556 | orchestrator |
2026-04-06 02:20:49.052564 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 02:20:49.052569 | orchestrator | Monday 06 April 2026 02:20:48 +0000 (0:00:01.547) 0:00:30.931 **********
2026-04-06 02:20:49.052574 | orchestrator | ===============================================================================
2026-04-06 02:20:49.052579 | orchestrator | osism.services.frr : Install frr package ------------------------------- 14.43s
2026-04-06 02:20:49.052584 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 3.26s
2026-04-06 02:20:49.052589 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.97s
2026-04-06 02:20:49.052594 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.89s
2026-04-06 02:20:49.052598 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.68s
2026-04-06 02:20:49.052615 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.62s
2026-04-06 02:20:49.052620 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.55s
2026-04-06 02:20:49.052625 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.35s
2026-04-06 02:20:49.052630 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.30s
2026-04-06 02:20:49.052635 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.26s
2026-04-06 02:20:49.052640 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.16s
2026-04-06 02:20:49.052645 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.15s
2026-04-06 02:20:49.513266 | orchestrator | + osism apply kubernetes
2026-04-06 02:20:52.293433 | orchestrator | 2026-04-06 02:20:52 | INFO  | Task 975576aa-ede6-4868-9d5a-9fc95713eda0 (kubernetes) was prepared for execution.
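The "Set sysctl parameters" task above enabled IPv4 forwarding, disabled ICMP redirects, and tuned multipath and rp_filter behaviour on the manager. A minimal sketch of those same settings as a sysctl drop-in file, with the values taken from the task output; the drop-in path and filename are assumptions, and the file is written to /tmp here so the sketch runs without root.

```shell
# Write the forwarding/redirect settings the frr role applied as a
# sysctl.d-style drop-in. Real deployments would place this under
# /etc/sysctl.d/ and load it with `sysctl --system` (needs root).
conf=/tmp/90-frr-routing.conf
cat > "$conf" <<'EOF'
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.fib_multipath_hash_policy = 1
net.ipv4.conf.default.ignore_routes_with_linkdown = 1
net.ipv4.conf.all.rp_filter = 2
EOF
# count the settings written
grep -c = "$conf"
```

rp_filter is set to 2 (loose mode) rather than 1 because with BGP-based routing the return path may legitimately differ from the ingress interface.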
2026-04-06 02:20:52.293527 | orchestrator | 2026-04-06 02:20:52 | INFO  | It takes a moment until task 975576aa-ede6-4868-9d5a-9fc95713eda0 (kubernetes) has been started and output is visible here.
2026-04-06 02:21:18.947734 | orchestrator |
2026-04-06 02:21:18.947856 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-04-06 02:21:18.947871 | orchestrator |
2026-04-06 02:21:18.947881 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-04-06 02:21:18.947891 | orchestrator | Monday 06 April 2026 02:20:58 +0000 (0:00:00.305) 0:00:00.305 **********
2026-04-06 02:21:18.947900 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:21:18.947910 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:21:18.947919 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:21:18.947928 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:21:18.947937 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:21:18.947946 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:21:18.947954 | orchestrator |
2026-04-06 02:21:18.947964 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-04-06 02:21:18.947973 | orchestrator | Monday 06 April 2026 02:20:59 +0000 (0:00:00.840) 0:00:01.145 **********
2026-04-06 02:21:18.948029 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:21:18.948045 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:21:18.948060 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:21:18.948075 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:21:18.948091 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:21:18.948105 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:21:18.948119 | orchestrator |
2026-04-06 02:21:18.948134 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-04-06 02:21:18.948148 | orchestrator | Monday 06 April 2026 02:20:59 +0000 (0:00:00.600) 0:00:01.746 **********
2026-04-06 02:21:18.948163 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:21:18.948173 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:21:18.948182 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:21:18.948191 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:21:18.948200 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:21:18.948209 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:21:18.948219 | orchestrator |
2026-04-06 02:21:18.948230 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-04-06 02:21:18.948240 | orchestrator | Monday 06 April 2026 02:21:00 +0000 (0:00:00.838) 0:00:02.585 **********
2026-04-06 02:21:18.948251 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:21:18.948261 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:21:18.948272 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:21:18.948286 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:21:18.948296 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:21:18.948306 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:21:18.948316 | orchestrator |
2026-04-06 02:21:18.948327 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-04-06 02:21:18.948338 | orchestrator | Monday 06 April 2026 02:21:02 +0000 (0:00:02.060) 0:00:04.645 **********
2026-04-06 02:21:18.948348 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:21:18.948358 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:21:18.948368 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:21:18.948378 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:21:18.948388 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:21:18.948402 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:21:18.948416 | orchestrator |
2026-04-06 02:21:18.948439 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-04-06 02:21:18.948455 | orchestrator | Monday 06 April 2026 02:21:03 +0000 (0:00:01.268) 0:00:05.914 **********
2026-04-06 02:21:18.948470 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:21:18.948512 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:21:18.948527 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:21:18.948541 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:21:18.948553 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:21:18.948567 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:21:18.948583 | orchestrator |
2026-04-06 02:21:18.948610 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-04-06 02:21:18.948625 | orchestrator | Monday 06 April 2026 02:21:05 +0000 (0:00:01.230) 0:00:07.144 **********
2026-04-06 02:21:18.948639 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:21:18.948677 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:21:18.948690 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:21:18.948703 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:21:18.948716 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:21:18.948730 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:21:18.948744 | orchestrator |
2026-04-06 02:21:18.948758 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-04-06 02:21:18.948770 | orchestrator | Monday 06 April 2026 02:21:05 +0000 (0:00:00.752) 0:00:07.897 **********
2026-04-06 02:21:18.948785 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:21:18.948799 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:21:18.948812 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:21:18.948826 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:21:18.948840 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:21:18.948853 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:21:18.948868 | orchestrator |
2026-04-06 02:21:18.948884 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-04-06 02:21:18.948900 | orchestrator | Monday 06 April 2026 02:21:06 +0000 (0:00:00.865) 0:00:08.762 **********
2026-04-06 02:21:18.948915 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 02:21:18.948931 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 02:21:18.948947 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:21:18.948963 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 02:21:18.949005 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 02:21:18.949021 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:21:18.949037 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 02:21:18.949051 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 02:21:18.949067 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:21:18.949082 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 02:21:18.949126 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 02:21:18.949142 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:21:18.949155 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 02:21:18.949169 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 02:21:18.949182 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:21:18.949198 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 02:21:18.949213 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 02:21:18.949228 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:21:18.949242 | orchestrator |
2026-04-06 02:21:18.949252 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-04-06 02:21:18.949261 | orchestrator | Monday 06 April 2026 02:21:07 +0000 (0:00:00.749) 0:00:09.512 **********
2026-04-06 02:21:18.949269 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:21:18.949278 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:21:18.949287 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:21:18.949307 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:21:18.949316 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:21:18.949325 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:21:18.949333 | orchestrator |
2026-04-06 02:21:18.949342 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-04-06 02:21:18.949352 | orchestrator | Monday 06 April 2026 02:21:09 +0000 (0:00:01.990) 0:00:11.502 **********
2026-04-06 02:21:18.949361 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:21:18.949370 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:21:18.949379 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:21:18.949388 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:21:18.949396 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:21:18.949405 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:21:18.949414 | orchestrator |
2026-04-06 02:21:18.949422 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-04-06 02:21:18.949431 | orchestrator | Monday 06 April 2026 02:21:10 +0000 (0:00:00.921) 0:00:12.424 **********
2026-04-06 02:21:18.949440 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:21:18.949449 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:21:18.949457 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:21:18.949466 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:21:18.949475 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:21:18.949484 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:21:18.949492 | orchestrator |
2026-04-06 02:21:18.949501 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-04-06 02:21:18.949510 | orchestrator | Monday 06 April 2026 02:21:15 +0000 (0:00:04.908) 0:00:17.332 **********
2026-04-06 02:21:18.949519 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:21:18.949535 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:21:18.949544 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:21:18.949553 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:21:18.949562 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:21:18.949570 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:21:18.949579 | orchestrator |
2026-04-06 02:21:18.949588 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-04-06 02:21:18.949597 | orchestrator | Monday 06 April 2026 02:21:16 +0000 (0:00:00.956) 0:00:18.288 **********
2026-04-06 02:21:18.949606 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:21:18.949614 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:21:18.949623 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:21:18.949631 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:21:18.949640 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:21:18.949649 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:21:18.949657 | orchestrator |
2026-04-06 02:21:18.949666 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-04-06 02:21:18.949677 | orchestrator | Monday 06 April 2026 02:21:17 +0000 (0:00:01.258) 0:00:19.547 **********
2026-04-06 02:21:18.949686 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:21:18.949694 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:21:18.949703 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:21:18.949712 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:21:18.949721 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:21:18.949729 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:21:18.949738 | orchestrator |
2026-04-06 02:21:18.949746 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-04-06 02:21:18.949755 | orchestrator | Monday 06 April 2026 02:21:18 +0000 (0:00:00.592) 0:00:20.139 **********
2026-04-06 02:21:18.949764 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-04-06 02:21:18.949779 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-04-06 02:21:18.949788 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:21:18.949796 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-04-06 02:21:18.949811 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-04-06 02:21:18.949819 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:21:18.949828 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-04-06 02:21:18.949836 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-04-06 02:21:18.949845 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:21:18.949854 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-04-06 02:21:18.949862 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-04-06 02:21:18.949871 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:21:18.949880 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-04-06 02:21:18.949888 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-04-06 02:21:18.949899 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:21:18.949914 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-04-06 02:21:18.949928 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-04-06 02:21:18.949941 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:21:18.949952 | orchestrator |
2026-04-06 02:21:18.949968 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-06 02:21:18.950145 | orchestrator | Monday 06 April 2026 02:21:18 +0000 (0:00:00.897) 0:00:21.036 **********
2026-04-06 02:22:36.507476 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:22:36.507574 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:22:36.507584 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:22:36.507592 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:22:36.507598 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:22:36.507605 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:22:36.507612 | orchestrator |
2026-04-06 02:22:36.507620 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-06 02:22:36.507630 | orchestrator | Monday 06 April 2026 02:21:19 +0000 (0:00:00.686) 0:00:21.723 **********
2026-04-06 02:22:36.507636 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:22:36.507643 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:22:36.507649 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:22:36.507656 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:22:36.507662 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:22:36.507668 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:22:36.507674 | orchestrator |
2026-04-06 02:22:36.507681 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-06 02:22:36.507687 | orchestrator |
2026-04-06 02:22:36.507693 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-06 02:22:36.507701 | orchestrator | Monday 06 April 2026 02:21:21 +0000 (0:00:01.648) 0:00:23.372 **********
2026-04-06 02:22:36.507708 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:22:36.507715 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:22:36.507721 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:22:36.507727 | orchestrator |
2026-04-06 02:22:36.507733 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-06 02:22:36.507739 | orchestrator | Monday 06 April 2026 02:21:23 +0000 (0:00:02.028) 0:00:25.400 **********
2026-04-06 02:22:36.507745 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:22:36.507751 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:22:36.507757 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:22:36.507763 | orchestrator |
2026-04-06 02:22:36.507769 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-06 02:22:36.507774 | orchestrator | Monday 06 April 2026 02:21:24 +0000 (0:00:01.306) 0:00:26.707 **********
2026-04-06 02:22:36.507780 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:22:36.507785 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:22:36.507792 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:22:36.507798 | orchestrator |
2026-04-06 02:22:36.507804 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-06 02:22:36.507833 | orchestrator | Monday 06 April 2026 02:21:25 +0000 (0:00:00.866) 0:00:27.574 **********
2026-04-06 02:22:36.507839 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:22:36.507844 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:22:36.507849 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:22:36.507855 | orchestrator |
2026-04-06 02:22:36.507861 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-04-06 02:22:36.507867 | orchestrator | Monday 06 April 2026 02:21:26 +0000 (0:00:00.745) 0:00:28.319 **********
2026-04-06 02:22:36.507873 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:22:36.507878 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:22:36.507884 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:22:36.507889 | orchestrator |
2026-04-06 02:22:36.507895 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-04-06 02:22:36.507916 | orchestrator | Monday 06 April 2026 02:21:26 +0000 (0:00:00.386) 0:00:28.706 **********
2026-04-06 02:22:36.507922 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:22:36.507927 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:22:36.507933 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:22:36.507938 | orchestrator |
2026-04-06 02:22:36.507944 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-04-06 02:22:36.507949 | orchestrator | Monday 06 April 2026 02:21:27 +0000 (0:00:00.996) 0:00:29.702 **********
2026-04-06 02:22:36.508033 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:22:36.508042 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:22:36.508049 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:22:36.508055 | orchestrator |
2026-04-06 02:22:36.508062 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-04-06 02:22:36.508068 | orchestrator | Monday 06 April 2026 02:21:29 +0000 (0:00:02.157) 0:00:31.860 **********
2026-04-06 02:22:36.508075 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:22:36.508083 | orchestrator |
2026-04-06 02:22:36.508090 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-04-06 02:22:36.508096 | orchestrator |
Monday 06 April 2026 02:21:30 +0000 (0:00:00.554) 0:00:32.415 ********** 2026-04-06 02:22:36.508102 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:22:36.508109 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:22:36.508115 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:22:36.508122 | orchestrator | 2026-04-06 02:22:36.508128 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-04-06 02:22:36.508135 | orchestrator | Monday 06 April 2026 02:21:32 +0000 (0:00:02.150) 0:00:34.565 ********** 2026-04-06 02:22:36.508141 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:22:36.508147 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:22:36.508154 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:22:36.508160 | orchestrator | 2026-04-06 02:22:36.508166 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-04-06 02:22:36.508173 | orchestrator | Monday 06 April 2026 02:21:32 +0000 (0:00:00.540) 0:00:35.105 ********** 2026-04-06 02:22:36.508179 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:22:36.508185 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:22:36.508191 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:22:36.508198 | orchestrator | 2026-04-06 02:22:36.508204 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-04-06 02:22:36.508211 | orchestrator | Monday 06 April 2026 02:21:34 +0000 (0:00:01.109) 0:00:36.215 ********** 2026-04-06 02:22:36.508217 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:22:36.508224 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:22:36.508230 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:22:36.508236 | orchestrator | 2026-04-06 02:22:36.508243 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-04-06 02:22:36.508267 | orchestrator | Monday 06 April 2026 
02:21:35 +0000 (0:00:01.748) 0:00:37.964 ********** 2026-04-06 02:22:36.508275 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:22:36.508291 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:22:36.508298 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:22:36.508304 | orchestrator | 2026-04-06 02:22:36.508310 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-04-06 02:22:36.508317 | orchestrator | Monday 06 April 2026 02:21:36 +0000 (0:00:00.590) 0:00:38.555 ********** 2026-04-06 02:22:36.508323 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:22:36.508330 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:22:36.508336 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:22:36.508343 | orchestrator | 2026-04-06 02:22:36.508349 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-04-06 02:22:36.508355 | orchestrator | Monday 06 April 2026 02:21:36 +0000 (0:00:00.347) 0:00:38.902 ********** 2026-04-06 02:22:36.508362 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:22:36.508368 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:22:36.508374 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:22:36.508380 | orchestrator | 2026-04-06 02:22:36.508393 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-04-06 02:22:36.508400 | orchestrator | Monday 06 April 2026 02:21:38 +0000 (0:00:01.498) 0:00:40.401 ********** 2026-04-06 02:22:36.508406 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:22:36.508412 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:22:36.508418 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:22:36.508424 | orchestrator | 2026-04-06 02:22:36.508430 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-04-06 02:22:36.508437 | orchestrator | Monday 06 April 2026 02:21:40 +0000 
(0:00:02.477) 0:00:42.878 ********** 2026-04-06 02:22:36.508443 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:22:36.508449 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:22:36.508455 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:22:36.508465 | orchestrator | 2026-04-06 02:22:36.508471 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-04-06 02:22:36.508478 | orchestrator | Monday 06 April 2026 02:21:41 +0000 (0:00:00.371) 0:00:43.249 ********** 2026-04-06 02:22:36.508484 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-06 02:22:36.508492 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-06 02:22:36.508498 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-06 02:22:36.508505 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-06 02:22:36.508511 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-06 02:22:36.508518 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-06 02:22:36.508524 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-06 02:22:36.508530 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-04-06 02:22:36.508537 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-06 02:22:36.508543 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-06 02:22:36.508549 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-06 02:22:36.508560 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-06 02:22:36.508567 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-06 02:22:36.508573 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-06 02:22:36.508579 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2026-04-06 02:22:36.508585 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:22:36.508591 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:22:36.508598 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:22:36.508604 | orchestrator | 2026-04-06 02:22:36.508614 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-04-06 02:22:36.508620 | orchestrator | Monday 06 April 2026 02:22:35 +0000 (0:00:53.940) 0:01:37.190 ********** 2026-04-06 02:22:36.508626 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:22:36.508633 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:22:36.508639 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:22:36.508645 | orchestrator | 2026-04-06 02:22:36.508651 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-04-06 02:22:36.508658 | orchestrator | Monday 06 April 2026 02:22:35 +0000 (0:00:00.361) 0:01:37.551 ********** 2026-04-06 02:22:36.508670 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:23:19.481414 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:23:19.481499 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:23:19.481506 | orchestrator | 2026-04-06 02:23:19.481512 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-04-06 02:23:19.481529 | orchestrator | Monday 06 April 2026 02:22:36 +0000 (0:00:01.052) 0:01:38.604 ********** 2026-04-06 02:23:19.481534 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:23:19.481538 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:23:19.481543 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:23:19.481547 | orchestrator | 2026-04-06 02:23:19.481552 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-04-06 02:23:19.481556 | orchestrator | Monday 06 April 2026 02:22:37 +0000 (0:00:01.256) 0:01:39.860 ********** 2026-04-06 02:23:19.481561 
| orchestrator | changed: [testbed-node-1] 2026-04-06 02:23:19.481566 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:23:19.481570 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:23:19.481574 | orchestrator | 2026-04-06 02:23:19.481578 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-04-06 02:23:19.481583 | orchestrator | Monday 06 April 2026 02:23:03 +0000 (0:00:25.872) 0:02:05.733 ********** 2026-04-06 02:23:19.481587 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:23:19.481592 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:23:19.481597 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:23:19.481601 | orchestrator | 2026-04-06 02:23:19.481606 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-04-06 02:23:19.481610 | orchestrator | Monday 06 April 2026 02:23:04 +0000 (0:00:00.671) 0:02:06.404 ********** 2026-04-06 02:23:19.481614 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:23:19.481619 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:23:19.481623 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:23:19.481627 | orchestrator | 2026-04-06 02:23:19.481631 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-04-06 02:23:19.481635 | orchestrator | Monday 06 April 2026 02:23:04 +0000 (0:00:00.685) 0:02:07.090 ********** 2026-04-06 02:23:19.481640 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:23:19.481644 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:23:19.481648 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:23:19.481652 | orchestrator | 2026-04-06 02:23:19.481656 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-04-06 02:23:19.481678 | orchestrator | Monday 06 April 2026 02:23:05 +0000 (0:00:00.667) 0:02:07.757 ********** 2026-04-06 02:23:19.481682 | orchestrator | ok: [testbed-node-0] 
2026-04-06 02:23:19.481687 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:23:19.481691 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:23:19.481695 | orchestrator | 2026-04-06 02:23:19.481699 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-04-06 02:23:19.481703 | orchestrator | Monday 06 April 2026 02:23:06 +0000 (0:00:00.827) 0:02:08.585 ********** 2026-04-06 02:23:19.481707 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:23:19.481712 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:23:19.481716 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:23:19.481720 | orchestrator | 2026-04-06 02:23:19.481724 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-04-06 02:23:19.481728 | orchestrator | Monday 06 April 2026 02:23:06 +0000 (0:00:00.337) 0:02:08.923 ********** 2026-04-06 02:23:19.481732 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:23:19.481737 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:23:19.481741 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:23:19.481745 | orchestrator | 2026-04-06 02:23:19.481749 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-04-06 02:23:19.481753 | orchestrator | Monday 06 April 2026 02:23:07 +0000 (0:00:00.636) 0:02:09.560 ********** 2026-04-06 02:23:19.481758 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:23:19.481762 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:23:19.481766 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:23:19.481770 | orchestrator | 2026-04-06 02:23:19.481775 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-04-06 02:23:19.481779 | orchestrator | Monday 06 April 2026 02:23:08 +0000 (0:00:00.656) 0:02:10.216 ********** 2026-04-06 02:23:19.481783 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:23:19.481787 | 
orchestrator | changed: [testbed-node-1] 2026-04-06 02:23:19.481791 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:23:19.481795 | orchestrator | 2026-04-06 02:23:19.481800 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-04-06 02:23:19.481804 | orchestrator | Monday 06 April 2026 02:23:09 +0000 (0:00:00.967) 0:02:11.183 ********** 2026-04-06 02:23:19.481811 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:23:19.481815 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:23:19.481819 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:23:19.481823 | orchestrator | 2026-04-06 02:23:19.481827 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-04-06 02:23:19.481832 | orchestrator | Monday 06 April 2026 02:23:10 +0000 (0:00:01.258) 0:02:12.442 ********** 2026-04-06 02:23:19.481836 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:23:19.481840 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:23:19.481844 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:23:19.481848 | orchestrator | 2026-04-06 02:23:19.481852 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-04-06 02:23:19.481856 | orchestrator | Monday 06 April 2026 02:23:10 +0000 (0:00:00.308) 0:02:12.750 ********** 2026-04-06 02:23:19.481861 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:23:19.481865 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:23:19.481869 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:23:19.481873 | orchestrator | 2026-04-06 02:23:19.481877 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-04-06 02:23:19.481881 | orchestrator | Monday 06 April 2026 02:23:10 +0000 (0:00:00.332) 0:02:13.082 ********** 2026-04-06 02:23:19.481886 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:23:19.481890 | orchestrator | 
ok: [testbed-node-1] 2026-04-06 02:23:19.481894 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:23:19.481898 | orchestrator | 2026-04-06 02:23:19.481902 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-04-06 02:23:19.481907 | orchestrator | Monday 06 April 2026 02:23:11 +0000 (0:00:00.631) 0:02:13.714 ********** 2026-04-06 02:23:19.481914 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:23:19.481918 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:23:19.481933 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:23:19.481985 | orchestrator | 2026-04-06 02:23:19.481993 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-04-06 02:23:19.482002 | orchestrator | Monday 06 April 2026 02:23:12 +0000 (0:00:00.924) 0:02:14.639 ********** 2026-04-06 02:23:19.482009 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-06 02:23:19.482052 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-06 02:23:19.482059 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-06 02:23:19.482063 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-06 02:23:19.482069 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-06 02:23:19.482073 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-06 02:23:19.482078 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-06 02:23:19.482084 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-06 
02:23:19.482089 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-06 02:23:19.482094 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-04-06 02:23:19.482099 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-06 02:23:19.482103 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-06 02:23:19.482108 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-04-06 02:23:19.482113 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-06 02:23:19.482117 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-06 02:23:19.482122 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-06 02:23:19.482127 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-06 02:23:19.482131 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-06 02:23:19.482136 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-06 02:23:19.482141 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-06 02:23:19.482146 | orchestrator | 2026-04-06 02:23:19.482151 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-04-06 02:23:19.482155 | orchestrator | 2026-04-06 02:23:19.482160 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-04-06 02:23:19.482165 | orchestrator | Monday 06 April 2026 02:23:15 +0000 (0:00:02.977) 
0:02:17.616 ********** 2026-04-06 02:23:19.482170 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:23:19.482174 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:23:19.482179 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:23:19.482183 | orchestrator | 2026-04-06 02:23:19.482200 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-04-06 02:23:19.482208 | orchestrator | Monday 06 April 2026 02:23:15 +0000 (0:00:00.386) 0:02:18.003 ********** 2026-04-06 02:23:19.482215 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:23:19.482222 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:23:19.482228 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:23:19.482240 | orchestrator | 2026-04-06 02:23:19.482247 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-04-06 02:23:19.482253 | orchestrator | Monday 06 April 2026 02:23:17 +0000 (0:00:01.551) 0:02:19.555 ********** 2026-04-06 02:23:19.482259 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:23:19.482265 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:23:19.482272 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:23:19.482278 | orchestrator | 2026-04-06 02:23:19.482285 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-04-06 02:23:19.482293 | orchestrator | Monday 06 April 2026 02:23:17 +0000 (0:00:00.375) 0:02:19.931 ********** 2026-04-06 02:23:19.482299 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 02:23:19.482306 | orchestrator | 2026-04-06 02:23:19.482312 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-04-06 02:23:19.482316 | orchestrator | Monday 06 April 2026 02:23:18 +0000 (0:00:00.555) 0:02:20.486 ********** 2026-04-06 02:23:19.482321 | orchestrator | skipping: [testbed-node-3] 2026-04-06 
02:23:19.482325 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:23:19.482329 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:23:19.482333 | orchestrator | 2026-04-06 02:23:19.482338 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-04-06 02:23:19.482345 | orchestrator | Monday 06 April 2026 02:23:18 +0000 (0:00:00.546) 0:02:21.032 ********** 2026-04-06 02:23:19.482351 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:23:19.482358 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:23:19.482364 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:23:19.482371 | orchestrator | 2026-04-06 02:23:19.482378 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-04-06 02:23:19.482385 | orchestrator | Monday 06 April 2026 02:23:19 +0000 (0:00:00.345) 0:02:21.377 ********** 2026-04-06 02:23:19.482399 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:25:01.409380 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:25:01.409481 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:25:01.409490 | orchestrator | 2026-04-06 02:25:01.409495 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-04-06 02:25:01.409501 | orchestrator | Monday 06 April 2026 02:23:19 +0000 (0:00:00.346) 0:02:21.724 ********** 2026-04-06 02:25:01.409505 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:25:01.409509 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:25:01.409513 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:25:01.409517 | orchestrator | 2026-04-06 02:25:01.409521 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-04-06 02:25:01.409526 | orchestrator | Monday 06 April 2026 02:23:20 +0000 (0:00:00.629) 0:02:22.354 ********** 2026-04-06 02:25:01.409530 | orchestrator | changed: [testbed-node-3] 2026-04-06 
02:25:01.409534 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:25:01.409538 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:25:01.409542 | orchestrator | 2026-04-06 02:25:01.409546 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-04-06 02:25:01.409550 | orchestrator | Monday 06 April 2026 02:23:21 +0000 (0:00:01.312) 0:02:23.666 ********** 2026-04-06 02:25:01.409553 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:25:01.409557 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:25:01.409561 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:25:01.409565 | orchestrator | 2026-04-06 02:25:01.409569 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-04-06 02:25:01.409573 | orchestrator | Monday 06 April 2026 02:23:22 +0000 (0:00:01.312) 0:02:24.978 ********** 2026-04-06 02:25:01.409579 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:25:01.409585 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:25:01.409591 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:25:01.409597 | orchestrator | 2026-04-06 02:25:01.409603 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-06 02:25:01.409631 | orchestrator | 2026-04-06 02:25:01.409638 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-06 02:25:01.409644 | orchestrator | Monday 06 April 2026 02:23:33 +0000 (0:00:10.137) 0:02:35.116 ********** 2026-04-06 02:25:01.409650 | orchestrator | ok: [testbed-manager] 2026-04-06 02:25:01.409658 | orchestrator | 2026-04-06 02:25:01.409664 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-06 02:25:01.409670 | orchestrator | Monday 06 April 2026 02:23:33 +0000 (0:00:00.823) 0:02:35.940 ********** 2026-04-06 02:25:01.409676 | orchestrator | changed: [testbed-manager] 
2026-04-06 02:25:01.409682 | orchestrator | 2026-04-06 02:25:01.409689 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-06 02:25:01.409695 | orchestrator | Monday 06 April 2026 02:23:34 +0000 (0:00:00.751) 0:02:36.692 ********** 2026-04-06 02:25:01.409701 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-06 02:25:01.409707 | orchestrator | 2026-04-06 02:25:01.409714 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-06 02:25:01.409720 | orchestrator | Monday 06 April 2026 02:23:35 +0000 (0:00:00.538) 0:02:37.231 ********** 2026-04-06 02:25:01.409726 | orchestrator | changed: [testbed-manager] 2026-04-06 02:25:01.409733 | orchestrator | 2026-04-06 02:25:01.409739 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-06 02:25:01.409745 | orchestrator | Monday 06 April 2026 02:23:36 +0000 (0:00:00.932) 0:02:38.163 ********** 2026-04-06 02:25:01.409751 | orchestrator | changed: [testbed-manager] 2026-04-06 02:25:01.409757 | orchestrator | 2026-04-06 02:25:01.409764 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-06 02:25:01.409768 | orchestrator | Monday 06 April 2026 02:23:36 +0000 (0:00:00.678) 0:02:38.841 ********** 2026-04-06 02:25:01.409772 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-06 02:25:01.409776 | orchestrator | 2026-04-06 02:25:01.409780 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-06 02:25:01.409783 | orchestrator | Monday 06 April 2026 02:23:38 +0000 (0:00:01.710) 0:02:40.552 ********** 2026-04-06 02:25:01.409787 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-06 02:25:01.409791 | orchestrator | 2026-04-06 02:25:01.409810 | orchestrator | TASK [Set KUBECONFIG environment variable] 
*************************************
2026-04-06 02:25:01.409814 | orchestrator | Monday 06 April 2026 02:23:39 +0000 (0:00:00.973) 0:02:41.525 **********
2026-04-06 02:25:01.409818 | orchestrator | changed: [testbed-manager]
2026-04-06 02:25:01.409822 | orchestrator |
2026-04-06 02:25:01.409826 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-06 02:25:01.409829 | orchestrator | Monday 06 April 2026 02:23:39 +0000 (0:00:00.492) 0:02:42.018 **********
2026-04-06 02:25:01.409833 | orchestrator | changed: [testbed-manager]
2026-04-06 02:25:01.409837 | orchestrator |
2026-04-06 02:25:01.409841 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-04-06 02:25:01.409845 | orchestrator |
2026-04-06 02:25:01.409848 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-04-06 02:25:01.409853 | orchestrator | Monday 06 April 2026 02:23:40 +0000 (0:00:00.488) 0:02:42.506 **********
2026-04-06 02:25:01.409865 | orchestrator | ok: [testbed-manager]
2026-04-06 02:25:01.409869 | orchestrator |
2026-04-06 02:25:01.409873 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-04-06 02:25:01.409877 | orchestrator | Monday 06 April 2026 02:23:40 +0000 (0:00:00.162) 0:02:42.669 **********
2026-04-06 02:25:01.409880 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-04-06 02:25:01.409931 | orchestrator |
2026-04-06 02:25:01.409936 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-04-06 02:25:01.409941 | orchestrator | Monday 06 April 2026 02:23:41 +0000 (0:00:00.476) 0:02:43.145 **********
2026-04-06 02:25:01.409945 | orchestrator | ok: [testbed-manager]
2026-04-06 02:25:01.409950 | orchestrator |
2026-04-06 02:25:01.409960 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-04-06 02:25:01.409964 | orchestrator | Monday 06 April 2026 02:23:41 +0000 (0:00:00.901) 0:02:44.047 **********
2026-04-06 02:25:01.409969 | orchestrator | ok: [testbed-manager]
2026-04-06 02:25:01.409974 | orchestrator |
2026-04-06 02:25:01.409994 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-04-06 02:25:01.410000 | orchestrator | Monday 06 April 2026 02:23:43 +0000 (0:00:01.785) 0:02:45.833 **********
2026-04-06 02:25:01.410007 | orchestrator | changed: [testbed-manager]
2026-04-06 02:25:01.410056 | orchestrator |
2026-04-06 02:25:01.410065 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-04-06 02:25:01.410071 | orchestrator | Monday 06 April 2026 02:23:44 +0000 (0:00:00.785) 0:02:46.619 **********
2026-04-06 02:25:01.410078 | orchestrator | ok: [testbed-manager]
2026-04-06 02:25:01.410084 | orchestrator |
2026-04-06 02:25:01.410089 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-04-06 02:25:01.410093 | orchestrator | Monday 06 April 2026 02:23:44 +0000 (0:00:00.466) 0:02:47.085 **********
2026-04-06 02:25:01.410098 | orchestrator | changed: [testbed-manager]
2026-04-06 02:25:01.410102 | orchestrator |
2026-04-06 02:25:01.410106 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-04-06 02:25:01.410111 | orchestrator | Monday 06 April 2026 02:23:53 +0000 (0:00:08.583) 0:02:55.669 **********
2026-04-06 02:25:01.410115 | orchestrator | changed: [testbed-manager]
2026-04-06 02:25:01.410120 | orchestrator |
2026-04-06 02:25:01.410124 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-04-06 02:25:01.410128 | orchestrator | Monday 06 April 2026 02:24:07 +0000 (0:00:13.450) 0:03:09.120 **********
2026-04-06 02:25:01.410133 | orchestrator | ok: [testbed-manager]
2026-04-06 02:25:01.410137 | orchestrator |
2026-04-06 02:25:01.410142 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-04-06 02:25:01.410146 | orchestrator |
2026-04-06 02:25:01.410150 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-04-06 02:25:01.410155 | orchestrator | Monday 06 April 2026 02:24:07 +0000 (0:00:00.852) 0:03:09.973 **********
2026-04-06 02:25:01.410159 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:25:01.410164 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:25:01.410168 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:25:01.410172 | orchestrator |
2026-04-06 02:25:01.410177 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-04-06 02:25:01.410181 | orchestrator | Monday 06 April 2026 02:24:08 +0000 (0:00:00.318) 0:03:10.291 **********
2026-04-06 02:25:01.410185 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:25:01.410189 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:25:01.410194 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:25:01.410198 | orchestrator |
2026-04-06 02:25:01.410203 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-04-06 02:25:01.410208 | orchestrator | Monday 06 April 2026 02:24:08 +0000 (0:00:00.380) 0:03:10.672 **********
2026-04-06 02:25:01.410216 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:25:01.410221 | orchestrator |
2026-04-06 02:25:01.410226 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-04-06 02:25:01.410230 | orchestrator | Monday 06 April 2026 02:24:09 +0000 (0:00:00.840) 0:03:11.513 **********
2026-04-06 02:25:01.410235 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-06 02:25:01.410239 | orchestrator |
2026-04-06 02:25:01.410244 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-04-06 02:25:01.410248 | orchestrator | Monday 06 April 2026 02:24:10 +0000 (0:00:00.981) 0:03:12.495 **********
2026-04-06 02:25:01.410252 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 02:25:01.410257 | orchestrator |
2026-04-06 02:25:01.410261 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-04-06 02:25:01.410271 | orchestrator | Monday 06 April 2026 02:24:11 +0000 (0:00:00.901) 0:03:13.397 **********
2026-04-06 02:25:01.410275 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:25:01.410279 | orchestrator |
2026-04-06 02:25:01.410282 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-04-06 02:25:01.410286 | orchestrator | Monday 06 April 2026 02:24:11 +0000 (0:00:00.155) 0:03:13.553 **********
2026-04-06 02:25:01.410290 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 02:25:01.410294 | orchestrator |
2026-04-06 02:25:01.410297 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-04-06 02:25:01.410301 | orchestrator | Monday 06 April 2026 02:24:12 +0000 (0:00:01.101) 0:03:14.654 **********
2026-04-06 02:25:01.410305 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:25:01.410309 | orchestrator |
2026-04-06 02:25:01.410312 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-04-06 02:25:01.410316 | orchestrator | Monday 06 April 2026 02:24:12 +0000 (0:00:00.138) 0:03:14.793 **********
2026-04-06 02:25:01.410320 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:25:01.410324 | orchestrator |
2026-04-06 02:25:01.410327 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-04-06 02:25:01.410331 | orchestrator | Monday 06 April 2026 02:24:12 +0000 (0:00:00.134) 0:03:14.927 **********
2026-04-06 02:25:01.410335 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:25:01.410339 | orchestrator |
2026-04-06 02:25:01.410342 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-04-06 02:25:01.410350 | orchestrator | Monday 06 April 2026 02:24:12 +0000 (0:00:00.138) 0:03:15.065 **********
2026-04-06 02:25:01.410354 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:25:01.410358 | orchestrator |
2026-04-06 02:25:01.410361 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-04-06 02:25:01.410365 | orchestrator | Monday 06 April 2026 02:24:13 +0000 (0:00:00.125) 0:03:15.190 **********
2026-04-06 02:25:01.410369 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-06 02:25:01.410373 | orchestrator |
2026-04-06 02:25:01.410376 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-04-06 02:25:01.410380 | orchestrator | Monday 06 April 2026 02:24:18 +0000 (0:00:05.790) 0:03:20.981 **********
2026-04-06 02:25:01.410384 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-04-06 02:25:01.410388 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-04-06 02:25:01.410397 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-04-06 02:25:26.529786 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-04-06 02:25:26.529941 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-04-06 02:25:26.529971 | orchestrator |
2026-04-06 02:25:26.529986 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-04-06 02:25:26.529999 | orchestrator | Monday 06 April 2026 02:25:01 +0000 (0:00:42.523) 0:04:03.504 **********
2026-04-06 02:25:26.530011 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 02:25:26.530218 | orchestrator |
2026-04-06 02:25:26.530230 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-04-06 02:25:26.530240 | orchestrator | Monday 06 April 2026 02:25:02 +0000 (0:00:01.321) 0:04:04.826 **********
2026-04-06 02:25:26.530253 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-06 02:25:26.530265 | orchestrator |
2026-04-06 02:25:26.530276 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-04-06 02:25:26.530287 | orchestrator | Monday 06 April 2026 02:25:04 +0000 (0:00:01.723) 0:04:06.550 **********
2026-04-06 02:25:26.530299 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-06 02:25:26.530340 | orchestrator |
2026-04-06 02:25:26.530353 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-04-06 02:25:26.530367 | orchestrator | Monday 06 April 2026 02:25:05 +0000 (0:00:01.369) 0:04:07.920 **********
2026-04-06 02:25:26.530480 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:25:26.530499 | orchestrator |
2026-04-06 02:25:26.530511 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-04-06 02:25:26.530558 | orchestrator | Monday 06 April 2026 02:25:05 +0000 (0:00:00.146) 0:04:08.066 **********
2026-04-06 02:25:26.530570 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-04-06 02:25:26.530623 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-04-06 02:25:26.530638 | orchestrator |
2026-04-06 02:25:26.530652 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-04-06 02:25:26.530669 | orchestrator | Monday 06 April 2026 02:25:08 +0000 (0:00:02.046) 0:04:10.113 **********
2026-04-06 02:25:26.530684 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:25:26.530706 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:25:26.530721 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:25:26.530732 | orchestrator |
2026-04-06 02:25:26.530744 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-04-06 02:25:26.530756 | orchestrator | Monday 06 April 2026 02:25:08 +0000 (0:00:00.316) 0:04:10.429 **********
2026-04-06 02:25:26.530798 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:25:26.530814 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:25:26.530934 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:25:26.530954 | orchestrator |
2026-04-06 02:25:26.530966 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-04-06 02:25:26.530978 | orchestrator |
2026-04-06 02:25:26.530991 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-04-06 02:25:26.531003 | orchestrator | Monday 06 April 2026 02:25:09 +0000 (0:00:00.981) 0:04:11.410 **********
2026-04-06 02:25:26.531015 | orchestrator | ok: [testbed-manager]
2026-04-06 02:25:26.531028 | orchestrator |
2026-04-06 02:25:26.531040 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-04-06 02:25:26.531052 | orchestrator | Monday 06 April 2026 02:25:09 +0000 (0:00:00.388) 0:04:11.799 **********
2026-04-06 02:25:26.531065 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-04-06 02:25:26.531076 | orchestrator |
2026-04-06 02:25:26.531088 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-04-06 02:25:26.531099 | orchestrator | Monday 06 April 2026 02:25:09 +0000 (0:00:00.281) 0:04:12.080 **********
2026-04-06 02:25:26.531109 | orchestrator | changed: [testbed-manager]
2026-04-06 02:25:26.531130 | orchestrator |
2026-04-06 02:25:26.531142 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-04-06 02:25:26.531153 | orchestrator |
2026-04-06 02:25:26.531164 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-04-06 02:25:26.531174 | orchestrator | Monday 06 April 2026 02:25:15 +0000 (0:00:05.587) 0:04:17.668 **********
2026-04-06 02:25:26.531216 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:25:26.531227 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:25:26.531238 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:25:26.531249 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:25:26.531260 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:25:26.531270 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:25:26.531307 | orchestrator |
2026-04-06 02:25:26.531318 | orchestrator | TASK [Manage labels] ***********************************************************
2026-04-06 02:25:26.531329 | orchestrator | Monday 06 April 2026 02:25:16 +0000 (0:00:00.660) 0:04:18.328 **********
2026-04-06 02:25:26.531341 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-06 02:25:26.531353 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-06 02:25:26.531424 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-06 02:25:26.531437 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-06 02:25:26.531466 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-06 02:25:26.531478 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-06 02:25:26.531491 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-06 02:25:26.531503 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-06 02:25:26.531515 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-06 02:25:26.531554 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-06 02:25:26.531567 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-06 02:25:26.531580 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-06 02:25:26.531592 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-06 02:25:26.531605 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-06 02:25:26.531617 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-06 02:25:26.531642 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-06 02:25:26.531657 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-06 02:25:26.531669 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-06 02:25:26.531680 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-06 02:25:26.531692 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-06 02:25:26.531704 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-06 02:25:26.531716 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-06 02:25:26.531728 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-06 02:25:26.531739 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-06 02:25:26.531751 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-06 02:25:26.531763 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-06 02:25:26.531775 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-06 02:25:26.531787 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-06 02:25:26.531799 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-06 02:25:26.531812 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-06 02:25:26.531826 | orchestrator |
2026-04-06 02:25:26.531839 | orchestrator | TASK [Manage annotations] ******************************************************
2026-04-06 02:25:26.531851 | orchestrator | Monday 06 April 2026 02:25:25 +0000 (0:00:08.913) 0:04:27.242 **********
2026-04-06 02:25:26.531862 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:25:26.531941 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:25:26.531955 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:25:26.531966 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:25:26.531977 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:25:26.531989 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:25:26.532001 | orchestrator |
2026-04-06 02:25:26.532012 | orchestrator | TASK [Manage taints] ***********************************************************
2026-04-06 02:25:26.532024 | orchestrator | Monday 06 April 2026 02:25:25 +0000 (0:00:00.572) 0:04:27.815 **********
2026-04-06 02:25:26.532035 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:25:26.532057 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:25:26.532068 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:25:26.532079 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:25:26.532090 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:25:26.532101 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:25:26.532111 | orchestrator |
2026-04-06 02:25:26.532122 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 02:25:26.532133 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:25:26.532147 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-04-06 02:25:26.532169 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-06 02:25:26.532180 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-06 02:25:26.532193 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-06 02:25:26.532205 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-06 02:25:26.532286 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-06 02:25:26.532301 | orchestrator |
2026-04-06 02:25:26.532314 | orchestrator |
2026-04-06 02:25:26.532326 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 02:25:26.532404 | orchestrator | Monday 06 April 2026 02:25:26 +0000 (0:00:00.805) 0:04:28.621 **********
2026-04-06 02:25:26.532431 | orchestrator | ===============================================================================
2026-04-06 02:25:27.079077 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.94s
2026-04-06 02:25:27.079155 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.52s
2026-04-06 02:25:27.079164 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.87s
2026-04-06 02:25:27.079171 | orchestrator | kubectl : Install required packages ------------------------------------ 13.45s
2026-04-06 02:25:27.079178 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.14s
2026-04-06 02:25:27.079185 | orchestrator | Manage labels ----------------------------------------------------------- 8.91s
2026-04-06 02:25:27.079191 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.58s
2026-04-06 02:25:27.079198 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.79s
2026-04-06 02:25:27.079205 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.59s
2026-04-06 02:25:27.079212 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 4.91s
2026-04-06 02:25:27.079218 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.98s
2026-04-06 02:25:27.079226 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.48s
2026-04-06 02:25:27.079233 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.16s
2026-04-06 02:25:27.079239 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.15s
2026-04-06 02:25:27.079246 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.06s
2026-04-06 02:25:27.079253 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.05s
2026-04-06 02:25:27.079260 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.03s
2026-04-06 02:25:27.079292 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.99s
2026-04-06 02:25:27.079297 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.79s
2026-04-06 02:25:27.079301 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.75s
2026-04-06 02:25:27.466295 | orchestrator | + osism apply copy-kubeconfig
2026-04-06 02:25:39.637669 | orchestrator | 2026-04-06 02:25:39 | INFO  | Task f3ea8399-fd25-4038-ba5f-64d2fcba9813 (copy-kubeconfig) was prepared for execution.
2026-04-06 02:25:39.637775 | orchestrator | 2026-04-06 02:25:39 | INFO  | It takes a moment until task f3ea8399-fd25-4038-ba5f-64d2fcba9813 (copy-kubeconfig) has been started and output is visible here.
2026-04-06 02:25:47.262815 | orchestrator |
2026-04-06 02:25:47.262931 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-04-06 02:25:47.262942 | orchestrator |
2026-04-06 02:25:47.262949 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-06 02:25:47.262956 | orchestrator | Monday 06 April 2026 02:25:44 +0000 (0:00:00.176) 0:00:00.176 **********
2026-04-06 02:25:47.262963 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-06 02:25:47.262969 | orchestrator |
2026-04-06 02:25:47.262975 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-06 02:25:47.262999 | orchestrator | Monday 06 April 2026 02:25:45 +0000 (0:00:00.747) 0:00:00.924 **********
2026-04-06 02:25:47.263005 | orchestrator | changed: [testbed-manager]
2026-04-06 02:25:47.263013 | orchestrator |
2026-04-06 02:25:47.263019 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-04-06 02:25:47.263026 | orchestrator | Monday 06 April 2026 02:25:46 +0000 (0:00:01.320) 0:00:02.244 **********
2026-04-06 02:25:47.263035 | orchestrator | changed: [testbed-manager]
2026-04-06 02:25:47.263042 | orchestrator |
2026-04-06 02:25:47.263051 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 02:25:47.263057 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:25:47.263065 | orchestrator |
2026-04-06 02:25:47.263071 | orchestrator |
2026-04-06 02:25:47.263077 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 02:25:47.263083 | orchestrator | Monday 06 April 2026 02:25:46 +0000 (0:00:00.484) 0:00:02.729 **********
2026-04-06 02:25:47.263089 | orchestrator | ===============================================================================
2026-04-06 02:25:47.263095 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.32s
2026-04-06 02:25:47.263101 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.75s
2026-04-06 02:25:47.263107 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.48s
2026-04-06 02:25:47.645989 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh
2026-04-06 02:25:59.999771 | orchestrator | 2026-04-06 02:25:59 | INFO  | Task ba822c21-3201-4a7e-ab89-912377799f89 (openstackclient) was prepared for execution.
2026-04-06 02:25:59.999932 | orchestrator | 2026-04-06 02:25:59 | INFO  | It takes a moment until task ba822c21-3201-4a7e-ab89-912377799f89 (openstackclient) has been started and output is visible here.
2026-04-06 02:26:50.595877 | orchestrator |
2026-04-06 02:26:50.595990 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-04-06 02:26:50.596014 | orchestrator |
2026-04-06 02:26:50.596776 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-04-06 02:26:50.596851 | orchestrator | Monday 06 April 2026 02:26:04 +0000 (0:00:00.259) 0:00:00.259 **********
2026-04-06 02:26:50.596860 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-04-06 02:26:50.596867 | orchestrator |
2026-04-06 02:26:50.596892 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-04-06 02:26:50.596898 | orchestrator | Monday 06 April 2026 02:26:05 +0000 (0:00:00.242) 0:00:00.501 **********
2026-04-06 02:26:50.596903 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-04-06 02:26:50.596909 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-04-06 02:26:50.596915 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-04-06 02:26:50.596920 | orchestrator |
2026-04-06 02:26:50.596925 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-04-06 02:26:50.596931 | orchestrator | Monday 06 April 2026 02:26:06 +0000 (0:00:01.289) 0:00:01.791 **********
2026-04-06 02:26:50.596936 | orchestrator | changed: [testbed-manager]
2026-04-06 02:26:50.596942 | orchestrator |
2026-04-06 02:26:50.596947 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-04-06 02:26:50.596952 | orchestrator | Monday 06 April 2026 02:26:08 +0000 (0:00:01.648) 0:00:03.440 **********
2026-04-06 02:26:50.596957 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-04-06 02:26:50.596963 | orchestrator | ok: [testbed-manager]
2026-04-06 02:26:50.596969 | orchestrator |
2026-04-06 02:26:50.596974 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-04-06 02:26:50.596979 | orchestrator | Monday 06 April 2026 02:26:44 +0000 (0:00:36.923) 0:00:40.363 **********
2026-04-06 02:26:50.596984 | orchestrator | changed: [testbed-manager]
2026-04-06 02:26:50.596989 | orchestrator |
2026-04-06 02:26:50.596994 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-04-06 02:26:50.596999 | orchestrator | Monday 06 April 2026 02:26:45 +0000 (0:00:01.021) 0:00:41.385 **********
2026-04-06 02:26:50.597004 | orchestrator | ok: [testbed-manager]
2026-04-06 02:26:50.597008 | orchestrator |
2026-04-06 02:26:50.597014 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-04-06 02:26:50.597019 | orchestrator | Monday 06 April 2026 02:26:46 +0000 (0:00:00.657) 0:00:42.042 **********
2026-04-06 02:26:50.597023 | orchestrator | changed: [testbed-manager]
2026-04-06 02:26:50.597028 | orchestrator |
2026-04-06 02:26:50.597034 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-04-06 02:26:50.597039 | orchestrator | Monday 06 April 2026 02:26:48 +0000 (0:00:01.637) 0:00:43.680 **********
2026-04-06 02:26:50.597044 | orchestrator | changed: [testbed-manager]
2026-04-06 02:26:50.597049 | orchestrator |
2026-04-06 02:26:50.597054 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-04-06 02:26:50.597059 | orchestrator | Monday 06 April 2026 02:26:49 +0000 (0:00:00.594) 0:00:44.489 **********
2026-04-06 02:26:50.597064 | orchestrator | changed: [testbed-manager]
2026-04-06 02:26:50.597069 | orchestrator |
2026-04-06 02:26:50.597074 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-04-06 02:26:50.597079 | orchestrator | Monday 06 April 2026 02:26:49 +0000 (0:00:00.594) 0:00:45.083 **********
2026-04-06 02:26:50.597084 | orchestrator | ok: [testbed-manager]
2026-04-06 02:26:50.597089 | orchestrator |
2026-04-06 02:26:50.597094 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 02:26:50.597099 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:26:50.597114 | orchestrator |
2026-04-06 02:26:50.597119 | orchestrator |
2026-04-06 02:26:50.597124 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 02:26:50.597129 | orchestrator | Monday 06 April 2026 02:26:50 +0000 (0:00:00.474) 0:00:45.557 **********
2026-04-06 02:26:50.597134 | orchestrator | ===============================================================================
2026-04-06 02:26:50.597147 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.92s
2026-04-06 02:26:50.597152 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.65s
2026-04-06 02:26:50.597161 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.64s
2026-04-06 02:26:50.597166 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.29s
2026-04-06 02:26:50.597172 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.02s
2026-04-06 02:26:50.597176 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.81s
2026-04-06 02:26:50.597181 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.66s
2026-04-06 02:26:50.597186 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.59s
2026-04-06 02:26:50.597191 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.47s
2026-04-06 02:26:50.597197 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.24s
2026-04-06 02:26:53.204038 | orchestrator | 2026-04-06 02:26:53 | INFO  | Task cc1eca83-8ace-4263-a7f6-64109a021147 (common) was prepared for execution.
2026-04-06 02:26:53.204163 | orchestrator | 2026-04-06 02:26:53 | INFO  | It takes a moment until task cc1eca83-8ace-4263-a7f6-64109a021147 (common) has been started and output is visible here.
2026-04-06 02:27:06.763169 | orchestrator |
2026-04-06 02:27:06.763253 | orchestrator | PLAY [Apply role common] *******************************************************
2026-04-06 02:27:06.763263 | orchestrator |
2026-04-06 02:27:06.763269 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-06 02:27:06.763276 | orchestrator | Monday 06 April 2026 02:26:57 +0000 (0:00:00.310) 0:00:00.310 **********
2026-04-06 02:27:06.763283 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 02:27:06.763290 | orchestrator |
2026-04-06 02:27:06.763296 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-04-06 02:27:06.763302 | orchestrator | Monday 06 April 2026 02:26:59 +0000 (0:00:01.444) 0:00:01.755 **********
2026-04-06 02:27:06.763307 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-06 02:27:06.763313 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-06 02:27:06.763319 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-06 02:27:06.763325 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-06 02:27:06.763331 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-06 02:27:06.763336 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-06 02:27:06.763342 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-06 02:27:06.763347 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-06 02:27:06.763384 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-06 02:27:06.763391 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-06 02:27:06.763396 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-06 02:27:06.763403 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-06 02:27:06.763409 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-06 02:27:06.763414 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-06 02:27:06.763420 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-06 02:27:06.763425 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-06 02:27:06.763431 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-06 02:27:06.763454 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-06 02:27:06.763460 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-06 02:27:06.763465 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-06 02:27:06.763471 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-06 02:27:06.763476 | orchestrator |
2026-04-06 02:27:06.763482 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-06 02:27:06.763488 | orchestrator | Monday 06 April 2026 02:27:02 +0000 (0:00:02.817) 0:00:04.573 **********
2026-04-06 02:27:06.763494 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 02:27:06.763501 | orchestrator |
2026-04-06 02:27:06.763507 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-04-06 02:27:06.763516 | orchestrator | Monday 06 April 2026 02:27:03 +0000 (0:00:01.520) 0:00:06.093 **********
2026-04-06 02:27:06.763524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-06 02:27:06.763532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-06 02:27:06.763555 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/',
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:06.763561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:06.763568 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:06.763573 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:06.763587 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:06.763597 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:06.763607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:06.763628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:07.884755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:07.884908 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:07.884967 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:07.884988 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:07.885010 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:07.885050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 
02:27:07.885073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:07.885116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:07.885131 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:07.885145 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:07.885169 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:07.885196 | orchestrator | 2026-04-06 02:27:07.885211 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-04-06 02:27:07.885226 | orchestrator | Monday 06 April 2026 02:27:07 +0000 (0:00:03.726) 0:00:09.819 ********** 2026-04-06 02:27:07.885243 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 02:27:07.885258 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:07.885272 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:07.885286 | orchestrator | skipping: [testbed-manager] 2026-04-06 02:27:07.885301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 02:27:07.885340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:08.602616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:08.602727 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:27:08.602790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 02:27:08.602833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:08.602844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:08.602855 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:27:08.602866 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 02:27:08.602881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:08.602891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:08.602902 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:27:08.602928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 02:27:08.602948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:08.602958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:08.602968 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:27:08.602978 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 02:27:08.602988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:08.602999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:08.603009 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:27:08.603019 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 02:27:08.603035 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:09.577872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:09.578000 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:27:09.578109 | orchestrator | 2026-04-06 02:27:09.578132 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-06 02:27:09.578153 | orchestrator | Monday 06 April 2026 02:27:08 +0000 (0:00:01.157) 0:00:10.976 ********** 2026-04-06 02:27:09.578174 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 02:27:09.578196 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:09.578216 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:09.578234 | orchestrator | skipping: [testbed-manager] 2026-04-06 02:27:09.578277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 02:27:09.578302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:09.578354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:09.578375 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:27:09.578432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 02:27:09.578453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:09.578473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:09.578493 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:27:09.578513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 02:27:09.578533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-06 02:27:09.578558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:09.578588 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:27:09.578605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 02:27:09.578648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:14.949465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:14.949540 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:27:14.949548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 02:27:14.949555 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:14.949560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:14.949564 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:27:14.949568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 02:27:14.949589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:14.949594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:14.949598 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:27:14.949602 | orchestrator | 2026-04-06 
02:27:14.949606 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-04-06 02:27:14.949612 | orchestrator | Monday 06 April 2026 02:27:10 +0000 (0:00:02.024) 0:00:13.001 ********** 2026-04-06 02:27:14.949615 | orchestrator | skipping: [testbed-manager] 2026-04-06 02:27:14.949619 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:27:14.949623 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:27:14.949627 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:27:14.949640 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:27:14.949644 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:27:14.949648 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:27:14.949652 | orchestrator | 2026-04-06 02:27:14.949656 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-04-06 02:27:14.949660 | orchestrator | Monday 06 April 2026 02:27:11 +0000 (0:00:00.726) 0:00:13.728 ********** 2026-04-06 02:27:14.949663 | orchestrator | skipping: [testbed-manager] 2026-04-06 02:27:14.949667 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:27:14.949671 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:27:14.949675 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:27:14.949679 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:27:14.949683 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:27:14.949686 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:27:14.949690 | orchestrator | 2026-04-06 02:27:14.949694 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-04-06 02:27:14.949698 | orchestrator | Monday 06 April 2026 02:27:12 +0000 (0:00:00.962) 0:00:14.690 ********** 2026-04-06 02:27:14.949703 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:14.949718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:14.949727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:14.949733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:14.949737 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:14.949741 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:14.949754 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:18.107241 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:18.107341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:18.107378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:18.107398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:18.107405 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:18.107411 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:18.107442 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:18.107450 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:18.107458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:18.107472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:18.107478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:18.107484 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:18.107490 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:18.107497 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:18.107503 | orchestrator | 2026-04-06 02:27:18.107511 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-04-06 02:27:18.107519 | orchestrator | Monday 06 April 2026 02:27:15 +0000 
(0:00:03.564) 0:00:18.254 ********** 2026-04-06 02:27:18.107525 | orchestrator | [WARNING]: Skipped 2026-04-06 02:27:18.107532 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-04-06 02:27:18.107540 | orchestrator | to this access issue: 2026-04-06 02:27:18.107546 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-04-06 02:27:18.107553 | orchestrator | directory 2026-04-06 02:27:18.107559 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-06 02:27:18.107567 | orchestrator | 2026-04-06 02:27:18.107573 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-04-06 02:27:18.107579 | orchestrator | Monday 06 April 2026 02:27:16 +0000 (0:00:01.086) 0:00:19.341 ********** 2026-04-06 02:27:18.107584 | orchestrator | [WARNING]: Skipped 2026-04-06 02:27:18.107597 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-06 02:27:28.574063 | orchestrator | to this access issue: 2026-04-06 02:27:28.574169 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-06 02:27:28.574181 | orchestrator | directory 2026-04-06 02:27:28.574189 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-06 02:27:28.574197 | orchestrator | 2026-04-06 02:27:28.574205 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-04-06 02:27:28.575101 | orchestrator | Monday 06 April 2026 02:27:18 +0000 (0:00:01.482) 0:00:20.823 ********** 2026-04-06 02:27:28.575152 | orchestrator | [WARNING]: Skipped 2026-04-06 02:27:28.575161 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-06 02:27:28.575168 | orchestrator | to this access issue: 2026-04-06 02:27:28.575175 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 
2026-04-06 02:27:28.575182 | orchestrator | directory 2026-04-06 02:27:28.575189 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-06 02:27:28.575197 | orchestrator | 2026-04-06 02:27:28.575204 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-06 02:27:28.575211 | orchestrator | Monday 06 April 2026 02:27:19 +0000 (0:00:00.912) 0:00:21.736 ********** 2026-04-06 02:27:28.575217 | orchestrator | [WARNING]: Skipped 2026-04-06 02:27:28.575224 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-06 02:27:28.575231 | orchestrator | to this access issue: 2026-04-06 02:27:28.575238 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-06 02:27:28.575245 | orchestrator | directory 2026-04-06 02:27:28.575251 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-06 02:27:28.575258 | orchestrator | 2026-04-06 02:27:28.575265 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-06 02:27:28.575271 | orchestrator | Monday 06 April 2026 02:27:20 +0000 (0:00:00.970) 0:00:22.707 ********** 2026-04-06 02:27:28.575278 | orchestrator | changed: [testbed-manager] 2026-04-06 02:27:28.575285 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:27:28.575292 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:27:28.575298 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:27:28.575305 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:27:28.575311 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:27:28.575332 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:27:28.575339 | orchestrator | 2026-04-06 02:27:28.575346 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-06 02:27:28.575353 | orchestrator | Monday 06 April 2026 02:27:23 +0000 (0:00:02.767) 0:00:25.474 ********** 
2026-04-06 02:27:28.575359 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-06 02:27:28.575367 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-06 02:27:28.575374 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-06 02:27:28.575380 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-06 02:27:28.575387 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-06 02:27:28.575393 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-06 02:27:28.575403 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-06 02:27:28.575410 | orchestrator | 2026-04-06 02:27:28.575417 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-06 02:27:28.575424 | orchestrator | Monday 06 April 2026 02:27:25 +0000 (0:00:02.114) 0:00:27.589 ********** 2026-04-06 02:27:28.575431 | orchestrator | changed: [testbed-manager] 2026-04-06 02:27:28.575438 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:27:28.575444 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:27:28.575451 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:27:28.575457 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:27:28.575464 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:27:28.575471 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:27:28.575477 | orchestrator | 2026-04-06 02:27:28.575484 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-06 02:27:28.575497 | orchestrator | Monday 06 
April 2026 02:27:27 +0000 (0:00:02.030) 0:00:29.620 ********** 2026-04-06 02:27:28.575505 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:28.575531 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:28.575540 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:28.575548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:28.575560 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:28.575572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:28.575587 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:28.575606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:28.575627 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:28.575649 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:34.624584 | orchestrator | ok: [testbed-node-3] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:34.624661 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:34.624668 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:34.624684 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:34.624705 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:34.624711 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:34.624716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:27:34.624732 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:34.624738 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:34.624742 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:34.624747 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:34.624751 | orchestrator | 2026-04-06 02:27:34.624756 | 
orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-04-06 02:27:34.624762 | orchestrator | Monday 06 April 2026 02:27:28 +0000 (0:00:01.573) 0:00:31.194 ********** 2026-04-06 02:27:34.624820 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-06 02:27:34.624826 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-06 02:27:34.624836 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-06 02:27:34.624840 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-06 02:27:34.624845 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-06 02:27:34.624849 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-06 02:27:34.624853 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-04-06 02:27:34.624857 | orchestrator | 2026-04-06 02:27:34.624861 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-04-06 02:27:34.624865 | orchestrator | Monday 06 April 2026 02:27:30 +0000 (0:00:02.053) 0:00:33.247 ********** 2026-04-06 02:27:34.624870 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-06 02:27:34.624874 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-06 02:27:34.624878 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-06 02:27:34.624887 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-06 02:27:34.624891 | orchestrator | changed: 
[testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-06 02:27:34.624895 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-06 02:27:34.624899 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-04-06 02:27:34.624903 | orchestrator | 2026-04-06 02:27:34.624907 | orchestrator | TASK [common : Check common containers] **************************************** 2026-04-06 02:27:34.624911 | orchestrator | Monday 06 April 2026 02:27:32 +0000 (0:00:01.768) 0:00:35.015 ********** 2026-04-06 02:27:34.624915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:34.624924 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:35.189612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:35.189709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:35.189747 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:35.189771 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:35.189834 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 02:27:35.189846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:35.189857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:35.189888 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:35.189899 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:35.189918 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:35.189933 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:35.189944 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:35.189955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:35.189967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:27:35.189985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:29:05.293494 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:29:05.293613 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:29:05.293626 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:29:05.293648 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:29:05.293675 | orchestrator | 2026-04-06 02:29:05.293685 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-04-06 02:29:05.293696 | orchestrator | Monday 06 April 2026 02:27:35 +0000 (0:00:02.552) 0:00:37.568 ********** 2026-04-06 02:29:05.293720 | orchestrator | changed: [testbed-manager] 2026-04-06 02:29:05.293786 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:29:05.293798 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:29:05.293806 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:29:05.293815 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:29:05.293823 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:29:05.293831 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:29:05.293839 | orchestrator | 2026-04-06 02:29:05.293847 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-04-06 02:29:05.293855 | orchestrator | Monday 06 April 2026 02:27:36 +0000 (0:00:01.451) 0:00:39.020 ********** 2026-04-06 02:29:05.293863 | orchestrator | changed: [testbed-manager] 2026-04-06 02:29:05.293871 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:29:05.293879 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:29:05.293887 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:29:05.293895 | orchestrator | changed: 
[testbed-node-3] 2026-04-06 02:29:05.293903 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:29:05.293911 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:29:05.293919 | orchestrator | 2026-04-06 02:29:05.293927 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-06 02:29:05.293935 | orchestrator | Monday 06 April 2026 02:27:37 +0000 (0:00:01.138) 0:00:40.159 ********** 2026-04-06 02:29:05.293943 | orchestrator | 2026-04-06 02:29:05.293951 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-06 02:29:05.293959 | orchestrator | Monday 06 April 2026 02:27:37 +0000 (0:00:00.069) 0:00:40.228 ********** 2026-04-06 02:29:05.293967 | orchestrator | 2026-04-06 02:29:05.293975 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-06 02:29:05.293983 | orchestrator | Monday 06 April 2026 02:27:37 +0000 (0:00:00.084) 0:00:40.313 ********** 2026-04-06 02:29:05.293991 | orchestrator | 2026-04-06 02:29:05.293999 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-06 02:29:05.294006 | orchestrator | Monday 06 April 2026 02:27:37 +0000 (0:00:00.071) 0:00:40.384 ********** 2026-04-06 02:29:05.294014 | orchestrator | 2026-04-06 02:29:05.294068 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-06 02:29:05.294089 | orchestrator | Monday 06 April 2026 02:27:38 +0000 (0:00:00.253) 0:00:40.638 ********** 2026-04-06 02:29:05.294099 | orchestrator | 2026-04-06 02:29:05.294108 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-06 02:29:05.294117 | orchestrator | Monday 06 April 2026 02:27:38 +0000 (0:00:00.066) 0:00:40.704 ********** 2026-04-06 02:29:05.294126 | orchestrator | 2026-04-06 02:29:05.294136 | orchestrator | TASK [common : Flush handlers] 
************************************************* 2026-04-06 02:29:05.294145 | orchestrator | Monday 06 April 2026 02:27:38 +0000 (0:00:00.061) 0:00:40.766 ********** 2026-04-06 02:29:05.294155 | orchestrator | 2026-04-06 02:29:05.294164 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-04-06 02:29:05.294173 | orchestrator | Monday 06 April 2026 02:27:38 +0000 (0:00:00.094) 0:00:40.861 ********** 2026-04-06 02:29:05.294183 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:29:05.294192 | orchestrator | changed: [testbed-manager] 2026-04-06 02:29:05.294202 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:29:05.294212 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:29:05.294221 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:29:05.294246 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:29:05.294256 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:29:05.294265 | orchestrator | 2026-04-06 02:29:05.294275 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-04-06 02:29:05.294284 | orchestrator | Monday 06 April 2026 02:28:19 +0000 (0:00:41.065) 0:01:21.926 ********** 2026-04-06 02:29:05.294294 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:29:05.294303 | orchestrator | changed: [testbed-manager] 2026-04-06 02:29:05.294312 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:29:05.294322 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:29:05.294332 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:29:05.294342 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:29:05.294351 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:29:05.294360 | orchestrator | 2026-04-06 02:29:05.294370 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-04-06 02:29:05.294379 | orchestrator | Monday 06 April 2026 02:28:55 +0000 (0:00:36.262) 0:01:58.189 
********** 2026-04-06 02:29:05.294389 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:29:05.294403 | orchestrator | ok: [testbed-manager] 2026-04-06 02:29:05.294415 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:29:05.294426 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:29:05.294442 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:29:05.294460 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:29:05.294473 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:29:05.294485 | orchestrator | 2026-04-06 02:29:05.294497 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-04-06 02:29:05.294510 | orchestrator | Monday 06 April 2026 02:28:57 +0000 (0:00:01.909) 0:02:00.099 ********** 2026-04-06 02:29:05.294522 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:29:05.294534 | orchestrator | changed: [testbed-manager] 2026-04-06 02:29:05.294547 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:29:05.294560 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:29:05.294573 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:29:05.294587 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:29:05.294600 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:29:05.294613 | orchestrator | 2026-04-06 02:29:05.294626 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 02:29:05.294641 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-06 02:29:05.294655 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-06 02:29:05.294675 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-06 02:29:05.294693 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-06 02:29:05.294701 | orchestrator | 
testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-06 02:29:05.294709 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-06 02:29:05.294717 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-06 02:29:05.294725 | orchestrator | 2026-04-06 02:29:05.294733 | orchestrator | 2026-04-06 02:29:05.294886 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 02:29:05.294901 | orchestrator | Monday 06 April 2026 02:29:05 +0000 (0:00:07.540) 0:02:07.639 ********** 2026-04-06 02:29:05.294910 | orchestrator | =============================================================================== 2026-04-06 02:29:05.294918 | orchestrator | common : Restart fluentd container ------------------------------------- 41.07s 2026-04-06 02:29:05.294926 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 36.26s 2026-04-06 02:29:05.294934 | orchestrator | common : Restart cron container ----------------------------------------- 7.54s 2026-04-06 02:29:05.294942 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.73s 2026-04-06 02:29:05.294950 | orchestrator | common : Copying over config.json files for services -------------------- 3.56s 2026-04-06 02:29:05.294958 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.82s 2026-04-06 02:29:05.294966 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.77s 2026-04-06 02:29:05.294974 | orchestrator | common : Check common containers ---------------------------------------- 2.55s 2026-04-06 02:29:05.294982 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.11s 2026-04-06 02:29:05.294990 | orchestrator | common : Copy 
rabbitmq-env.conf to kolla toolbox ------------------------ 2.05s 2026-04-06 02:29:05.294998 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.03s 2026-04-06 02:29:05.295007 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.02s 2026-04-06 02:29:05.295021 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.91s 2026-04-06 02:29:05.295032 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.77s 2026-04-06 02:29:05.295052 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.57s 2026-04-06 02:29:05.295068 | orchestrator | common : include_tasks -------------------------------------------------- 1.52s 2026-04-06 02:29:05.295095 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.48s 2026-04-06 02:29:05.759599 | orchestrator | common : Creating log volume -------------------------------------------- 1.45s 2026-04-06 02:29:05.759732 | orchestrator | common : include_tasks -------------------------------------------------- 1.44s 2026-04-06 02:29:05.759865 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.16s 2026-04-06 02:29:08.388102 | orchestrator | 2026-04-06 02:29:08 | INFO  | Task 81646c34-aa72-4dfd-a25b-0b980b1833f1 (loadbalancer) was prepared for execution. 2026-04-06 02:29:08.388229 | orchestrator | 2026-04-06 02:29:08 | INFO  | It takes a moment until task 81646c34-aa72-4dfd-a25b-0b980b1833f1 (loadbalancer) has been started and output is visible here. 
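The per-task timing summary in the TASKS RECAP above ("Restart fluentd container ... 41.07s") is the output style of Ansible's `profile_tasks` callback plugin. As a minimal sketch (assuming the `ansible.posix` collection is installed; inventory and playbook names here are hypothetical), the same recap can be enabled for a manual run like so:

```shell
# Sketch: enable the per-task timing recap seen in the log above.
# ANSIBLE_CALLBACKS_ENABLED activates the profile_tasks callback from
# the ansible.posix collection for this invocation only.
export ANSIBLE_CALLBACKS_ENABLED=ansible.posix.profile_tasks
ansible-playbook -i inventory site.yml
```

The same effect can be made persistent via `callbacks_enabled` in the `[defaults]` section of `ansible.cfg`.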
2026-04-06 02:29:23.883617 | orchestrator |
2026-04-06 02:29:23.883897 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 02:29:23.883923 | orchestrator |
2026-04-06 02:29:23.883936 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 02:29:23.884011 | orchestrator | Monday 06 April 2026 02:29:13 +0000 (0:00:00.285) 0:00:00.285 **********
2026-04-06 02:29:23.884026 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:29:23.884041 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:29:23.884052 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:29:23.884067 | orchestrator |
2026-04-06 02:29:23.884091 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 02:29:23.884119 | orchestrator | Monday 06 April 2026 02:29:13 +0000 (0:00:00.451) 0:00:00.736 **********
2026-04-06 02:29:23.884139 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-04-06 02:29:23.884158 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-04-06 02:29:23.884176 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-04-06 02:29:23.884194 | orchestrator |
2026-04-06 02:29:23.884212 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-04-06 02:29:23.884231 | orchestrator |
2026-04-06 02:29:23.884249 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-06 02:29:23.884268 | orchestrator | Monday 06 April 2026 02:29:14 +0000 (0:00:00.553) 0:00:01.290 **********
2026-04-06 02:29:23.884305 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:29:23.884324 | orchestrator |
2026-04-06 02:29:23.884341 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-04-06 02:29:23.884360 | orchestrator | Monday 06 April 2026 02:29:14 +0000 (0:00:00.716) 0:00:02.007 **********
2026-04-06 02:29:23.884377 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:29:23.884393 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:29:23.884408 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:29:23.884426 | orchestrator |
2026-04-06 02:29:23.884443 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-06 02:29:23.884459 | orchestrator | Monday 06 April 2026 02:29:15 +0000 (0:00:00.593) 0:00:02.601 **********
2026-04-06 02:29:23.884475 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:29:23.884492 | orchestrator |
2026-04-06 02:29:23.884504 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-04-06 02:29:23.884514 | orchestrator | Monday 06 April 2026 02:29:16 +0000 (0:00:00.763) 0:00:03.364 **********
2026-04-06 02:29:23.884524 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:29:23.884533 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:29:23.884543 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:29:23.884552 | orchestrator |
2026-04-06 02:29:23.884562 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-04-06 02:29:23.884571 | orchestrator | Monday 06 April 2026 02:29:16 +0000 (0:00:00.618) 0:00:03.983 **********
2026-04-06 02:29:23.884581 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-06 02:29:23.884591 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-06 02:29:23.884601 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-06 02:29:23.884610 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-06 02:29:23.884619 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-06 02:29:23.884629 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-06 02:29:23.884639 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-06 02:29:23.884649 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-06 02:29:23.884659 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-06 02:29:23.884668 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-06 02:29:23.884693 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-06 02:29:23.884702 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-06 02:29:23.884712 | orchestrator |
2026-04-06 02:29:23.884722 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-06 02:29:23.884758 | orchestrator | Monday 06 April 2026 02:29:19 +0000 (0:00:02.451) 0:00:06.434 **********
2026-04-06 02:29:23.884770 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-06 02:29:23.884781 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-06 02:29:23.884790 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-06 02:29:23.884800 | orchestrator |
2026-04-06 02:29:23.884810 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-06 02:29:23.884820 | orchestrator | Monday 06 April 2026 02:29:20 +0000 (0:00:00.845) 0:00:07.279 **********
2026-04-06 02:29:23.884830 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-06 02:29:23.884840 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-06 02:29:23.884850 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-06 02:29:23.884859 | orchestrator |
2026-04-06 02:29:23.884869 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-06 02:29:23.884879 | orchestrator | Monday 06 April 2026 02:29:21 +0000 (0:00:01.311) 0:00:08.591 **********
2026-04-06 02:29:23.884888 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-04-06 02:29:23.884898 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:29:23.884931 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-04-06 02:29:23.884941 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:29:23.884965 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-04-06 02:29:23.884975 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:29:23.884996 | orchestrator |
2026-04-06 02:29:23.885007 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-04-06 02:29:23.885017 | orchestrator | Monday 06 April 2026 02:29:22 +0000 (0:00:00.622) 0:00:09.213 **********
2026-04-06 02:29:23.885029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-06 02:29:23.885057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-06 02:29:23.885078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-06 02:29:23.885106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-06 02:29:23.885124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-06 02:29:23.885150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-06 02:29:29.310994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-06 02:29:29.311107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-06 02:29:29.311121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-06 02:29:29.311131 | orchestrator |
2026-04-06 02:29:29.311140 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-04-06 02:29:29.311149 | orchestrator | Monday 06 April 2026 02:29:23 +0000 (0:00:01.864) 0:00:11.077 **********
2026-04-06 02:29:29.311175 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:29:29.311185 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:29:29.311192 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:29:29.311200 | orchestrator |
2026-04-06 02:29:29.311208 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-04-06 02:29:29.311215 | orchestrator | Monday 06 April 2026 02:29:24 +0000 (0:00:00.904) 0:00:11.982 **********
2026-04-06 02:29:29.311223 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-04-06 02:29:29.311231 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-04-06 02:29:29.311238 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-04-06 02:29:29.311256 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-04-06 02:29:29.311263 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-04-06 02:29:29.311278 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-04-06 02:29:29.311285 | orchestrator |
2026-04-06 02:29:29.311292 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-04-06 02:29:29.311300 | orchestrator | Monday 06 April 2026 02:29:26 +0000 (0:00:01.475) 0:00:13.457 **********
2026-04-06 02:29:29.311307 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:29:29.311314 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:29:29.311322 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:29:29.311332 | orchestrator |
2026-04-06 02:29:29.311348 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-04-06 02:29:29.311364 | orchestrator | Monday 06 April 2026 02:29:27 +0000 (0:00:01.042) 0:00:14.500 **********
2026-04-06 02:29:29.311375 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:29:29.311386 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:29:29.311398 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:29:29.311411 | orchestrator |
2026-04-06 02:29:29.311424 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-04-06 02:29:29.311436 | orchestrator | Monday 06 April 2026 02:29:28 +0000 (0:00:01.386) 0:00:15.886 **********
2026-04-06 02:29:29.311448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-06 02:29:29.311475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-06 02:29:29.311485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-06 02:29:29.311496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d352c5baeb3dcf7b10cda009042f0f32c2f78b99', '__omit_place_holder__d352c5baeb3dcf7b10cda009042f0f32c2f78b99'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-06 02:29:29.311514 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:29:29.311524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-06 02:29:29.311567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-06 02:29:29.311576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-06 02:29:29.311583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d352c5baeb3dcf7b10cda009042f0f32c2f78b99', '__omit_place_holder__d352c5baeb3dcf7b10cda009042f0f32c2f78b99'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-06 02:29:29.311591 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:29:29.311604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-06 02:29:32.087122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-06 02:29:32.087222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-06 02:29:32.087237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d352c5baeb3dcf7b10cda009042f0f32c2f78b99', '__omit_place_holder__d352c5baeb3dcf7b10cda009042f0f32c2f78b99'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-06 02:29:32.087248 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:29:32.087257 | orchestrator |
2026-04-06 02:29:32.087266 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-04-06 02:29:32.087276 | orchestrator | Monday 06 April 2026 02:29:29 +0000 (0:00:00.619) 0:00:16.506 **********
2026-04-06 02:29:32.087284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-06 02:29:32.087295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-06 02:29:32.087304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-06 02:29:32.087402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-06 02:29:32.087411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-06 02:29:32.087417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d352c5baeb3dcf7b10cda009042f0f32c2f78b99', '__omit_place_holder__d352c5baeb3dcf7b10cda009042f0f32c2f78b99'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-06 02:29:32.087422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-06 02:29:32.087428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-06 02:29:32.087444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-06 02:29:32.087478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d352c5baeb3dcf7b10cda009042f0f32c2f78b99', '__omit_place_holder__d352c5baeb3dcf7b10cda009042f0f32c2f78b99'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-06 02:29:40.969414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-06 02:29:40.969508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d352c5baeb3dcf7b10cda009042f0f32c2f78b99', '__omit_place_holder__d352c5baeb3dcf7b10cda009042f0f32c2f78b99'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-04-06 02:29:40.969519 | orchestrator |
2026-04-06 02:29:40.969527 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-04-06 02:29:40.969535 | orchestrator | Monday 06 April 2026 02:29:32 +0000 (0:00:02.782) 0:00:19.288 **********
2026-04-06 02:29:40.969541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-06 02:29:40.969550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-06 02:29:40.969556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-06 02:29:40.969583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-06 02:29:40.969617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-06 02:29:40.969625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-06 02:29:40.969632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-06 02:29:40.969639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-06 02:29:40.969645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-06 02:29:40.969652 | orchestrator |
2026-04-06 02:29:40.969658 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-04-06 02:29:40.969664 | orchestrator | Monday 06 April 2026 02:29:35 +0000 (0:00:03.212) 0:00:22.501 **********
2026-04-06 02:29:40.969677 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-06 02:29:40.969685 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-06 02:29:40.969691 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-04-06 02:29:40.969698 | orchestrator |
2026-04-06 02:29:40.969704 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-04-06 02:29:40.969710 | orchestrator | Monday 06 April 2026 02:29:37 +0000 (0:00:01.885) 0:00:24.387 **********
2026-04-06 02:29:40.969716 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-06 02:29:40.969723 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-06 02:29:40.969821 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-04-06 02:29:40.969829 | orchestrator |
2026-04-06 02:29:40.969835 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-04-06 02:29:40.969842 | orchestrator | Monday 06 April 2026 02:29:40 +0000 (0:00:03.108) 0:00:27.496 **********
2026-04-06 02:29:40.969848 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:29:40.969856 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:29:40.969862 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:29:40.969869 | orchestrator |
2026-04-06 02:29:40.969881 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-04-06 02:29:52.720456 | orchestrator | Monday 06 April 2026 02:29:40 +0000 (0:00:00.671) 0:00:28.167 **********
2026-04-06 02:29:52.720596 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-06 02:29:52.720627 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-06 02:29:52.720639 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-04-06 02:29:52.720649 | orchestrator |
2026-04-06 02:29:52.720660 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-04-06 02:29:52.720670 | orchestrator | Monday 06 April 2026 02:29:43 +0000 (0:00:02.116) 0:00:30.284 **********
2026-04-06 02:29:52.720681 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-06 02:29:52.720692 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-06 02:29:52.720702 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-04-06 02:29:52.720711 | orchestrator |
2026-04-06 02:29:52.720722 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-04-06 02:29:52.720775 | orchestrator | Monday 06 April 2026
02:29:45 +0000 (0:00:02.243) 0:00:32.527 ********** 2026-04-06 02:29:52.720786 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-04-06 02:29:52.720797 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-04-06 02:29:52.720807 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-04-06 02:29:52.720816 | orchestrator | 2026-04-06 02:29:52.720838 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-06 02:29:52.720849 | orchestrator | Monday 06 April 2026 02:29:46 +0000 (0:00:01.411) 0:00:33.938 ********** 2026-04-06 02:29:52.720859 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-04-06 02:29:52.720869 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-06 02:29:52.720879 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-06 02:29:52.720888 | orchestrator | 2026-04-06 02:29:52.720922 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-06 02:29:52.720933 | orchestrator | Monday 06 April 2026 02:29:48 +0000 (0:00:01.438) 0:00:35.377 ********** 2026-04-06 02:29:52.720943 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:29:52.720953 | orchestrator | 2026-04-06 02:29:52.720962 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-04-06 02:29:52.720972 | orchestrator | Monday 06 April 2026 02:29:48 +0000 (0:00:00.596) 0:00:35.973 ********** 2026-04-06 02:29:52.720986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-06 02:29:52.721001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-06 02:29:52.721025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-06 02:29:52.721075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-06 02:29:52.721096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-06 02:29:52.721114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-06 02:29:52.721143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-06 02:29:52.721162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-06 02:29:52.721179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-06 02:29:52.721197 | orchestrator | 2026-04-06 02:29:52.721214 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-04-06 02:29:52.721230 | orchestrator | Monday 06 April 2026 02:29:52 +0000 (0:00:03.306) 0:00:39.280 ********** 2026-04-06 02:29:52.721273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-06 02:29:53.545019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:29:53.545142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:29:53.545187 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:29:53.545202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-06 02:29:53.545215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:29:53.545227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:29:53.545239 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:29:53.545250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-06 02:29:53.545309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:29:53.545329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:29:53.545363 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:29:53.545383 | orchestrator | 2026-04-06 02:29:53.545403 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-04-06 
02:29:53.545417 | orchestrator | Monday 06 April 2026 02:29:52 +0000 (0:00:00.641) 0:00:39.921 ********** 2026-04-06 02:29:53.545430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-06 02:29:53.545442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:29:53.545453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:29:53.545466 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:29:53.545486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-06 02:29:53.545523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:29:54.403848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:29:54.403987 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:29:54.404006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-06 02:29:54.404021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:29:54.404034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:29:54.404045 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:29:54.404057 | orchestrator | 2026-04-06 02:29:54.404069 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-06 02:29:54.404082 | orchestrator | Monday 06 April 2026 02:29:53 +0000 (0:00:00.818) 0:00:40.740 ********** 2026-04-06 02:29:54.404093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-06 02:29:54.404105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:29:54.404138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:29:54.404157 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:29:54.404169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-06 02:29:54.404182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:29:54.404193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:29:54.404204 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:29:54.404216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-06 02:29:54.404247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:29:54.404266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:29:54.404295 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:29:55.812901 | orchestrator | 2026-04-06 02:29:55.813010 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-06 02:29:55.813027 | orchestrator | Monday 06 April 2026 02:29:54 +0000 (0:00:00.848) 0:00:41.588 ********** 2026-04-06 02:29:55.813044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-06 02:29:55.813061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:29:55.813074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:29:55.813087 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:29:55.813100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-06 02:29:55.813113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:29:55.813151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:29:55.813186 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:29:55.813220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-06 02:29:55.813233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:29:55.813244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:29:55.813256 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:29:55.813267 | orchestrator | 2026-04-06 02:29:55.813279 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-06 02:29:55.813290 | orchestrator | Monday 06 April 2026 02:29:54 +0000 (0:00:00.601) 0:00:42.190 ********** 2026-04-06 02:29:55.813301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-06 02:29:55.813316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:29:55.813362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:29:55.813388 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:29:55.813439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-06 02:29:56.910628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:29:56.910780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:29:56.910805 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:29:56.910824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-06 02:29:56.910839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:29:56.910855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:29:56.910900 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:29:56.910916 | orchestrator | 2026-04-06 02:29:56.910930 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-04-06 02:29:56.910945 | orchestrator | Monday 06 April 2026 02:29:55 +0000 (0:00:00.818) 0:00:43.008 ********** 2026-04-06 02:29:56.910975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-04-06 02:29:56.911017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:29:56.911031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:29:56.911045 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:29:56.911059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-04-06 02:29:56.911073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:29:56.911110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:29:56.911125 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:29:56.911146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-04-06 02:29:56.911172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:29:58.323335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:29:58.323426 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:29:58.323438 | orchestrator | 2026-04-06 02:29:58.323446 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-04-06 02:29:58.323454 | orchestrator | Monday 06 April 2026 02:29:56 +0000 (0:00:01.094) 0:00:44.103 ********** 2026-04-06 02:29:58.323463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-06 02:29:58.323471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:29:58.323496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:29:58.323504 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:29:58.323511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-06 02:29:58.323529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:29:58.323551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:29:58.323558 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:29:58.323565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-06 02:29:58.323572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:29:58.323584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:29:58.323591 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:29:58.323597 | orchestrator | 2026-04-06 02:29:58.323604 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-04-06 02:29:58.323610 | orchestrator | Monday 06 April 2026 02:29:57 +0000 (0:00:00.600) 0:00:44.703 ********** 2026-04-06 02:29:58.323617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-06 02:29:58.323624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:29:58.323644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:30:05.184665 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:30:05.184820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-06 02:30:05.184835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:30:05.184875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:30:05.184883 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:30:05.184889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-06 02:30:05.184896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 02:30:05.184916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 02:30:05.184922 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:30:05.184929 | orchestrator | 2026-04-06 02:30:05.184937 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-06 02:30:05.184944 | orchestrator | Monday 06 April 2026 02:29:58 +0000 (0:00:00.809) 0:00:45.513 ********** 2026-04-06 02:30:05.184950 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-06 02:30:05.184973 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-06 02:30:05.184980 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-06 02:30:05.184987 | orchestrator | 2026-04-06 02:30:05.184994 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-06 02:30:05.185000 | orchestrator | Monday 06 April 2026 02:30:00 +0000 (0:00:01.712) 0:00:47.225 ********** 2026-04-06 02:30:05.185007 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-06 02:30:05.185014 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-06 02:30:05.185020 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-06 02:30:05.185027 | orchestrator | 2026-04-06 02:30:05.185041 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-06 02:30:05.185048 | orchestrator | Monday 06 April 2026 02:30:01 +0000 (0:00:01.730) 0:00:48.955 ********** 2026-04-06 02:30:05.185054 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-06 02:30:05.185062 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-06 02:30:05.185068 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-06 02:30:05.185075 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-06 02:30:05.185082 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:30:05.185089 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-06 02:30:05.185106 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:30:05.185113 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-06 02:30:05.185120 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:30:05.185127 | orchestrator | 2026-04-06 02:30:05.185133 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-04-06 02:30:05.185140 | orchestrator | Monday 06 April 2026 02:30:02 +0000 (0:00:00.890) 0:00:49.845 ********** 2026-04-06 02:30:05.185148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-06 02:30:05.185156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-06 02:30:05.185166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-06 02:30:05.185181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-06 02:30:09.624141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-06 02:30:09.624285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-06 02:30:09.624313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-06 02:30:09.624335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-06 02:30:09.624371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-06 02:30:09.624391 | orchestrator | 2026-04-06 02:30:09.624413 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-04-06 02:30:09.624456 | orchestrator | Monday 06 April 2026 02:30:05 +0000 (0:00:02.533) 0:00:52.379 ********** 2026-04-06 02:30:09.624476 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:30:09.624496 | orchestrator | 2026-04-06 02:30:09.624515 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-06 02:30:09.624534 | orchestrator | Monday 06 April 2026 02:30:06 +0000 (0:00:00.880) 0:00:53.259 ********** 2026-04-06 02:30:09.624584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-06 02:30:09.624639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 02:30:09.624663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 02:30:09.624684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 02:30:09.624704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-06 02:30:09.624765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-06 02:30:09.624814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 02:30:10.311528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 02:30:10.311673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 02:30:10.311696 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 02:30:10.311711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 02:30:10.311873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 02:30:10.311893 | orchestrator | 2026-04-06 02:30:10.311909 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] 
*** 2026-04-06 02:30:10.311920 | orchestrator | Monday 06 April 2026 02:30:09 +0000 (0:00:03.557) 0:00:56.817 ********** 2026-04-06 02:30:10.311951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-06 02:30:10.311979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 02:30:10.311989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 02:30:10.311998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 02:30:10.312006 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:30:10.312016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-06 02:30:10.312029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 02:30:10.312044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 02:30:10.312061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 02:30:19.760153 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:30:19.760270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-06 02:30:19.760289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 02:30:19.760303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
2026-04-06 02:30:19.760315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 02:30:19.760350 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:30:19.760363 | orchestrator | 2026-04-06 02:30:19.760375 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-06 02:30:19.760387 | orchestrator | Monday 06 April 2026 02:30:10 +0000 (0:00:00.694) 0:00:57.511 ********** 2026-04-06 02:30:19.760400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-06 02:30:19.760414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-06 02:30:19.760427 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:30:19.760454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-06 02:30:19.760466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-06 02:30:19.760477 | 
orchestrator | skipping: [testbed-node-1] 2026-04-06 02:30:19.760488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-06 02:30:19.760519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-06 02:30:19.760531 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:30:19.760543 | orchestrator | 2026-04-06 02:30:19.760554 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-06 02:30:19.760565 | orchestrator | Monday 06 April 2026 02:30:11 +0000 (0:00:01.257) 0:00:58.769 ********** 2026-04-06 02:30:19.760576 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:30:19.760587 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:30:19.760598 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:30:19.760609 | orchestrator | 2026-04-06 02:30:19.760621 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-06 02:30:19.760632 | orchestrator | Monday 06 April 2026 02:30:12 +0000 (0:00:01.324) 0:01:00.094 ********** 2026-04-06 02:30:19.760643 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:30:19.760654 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:30:19.760665 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:30:19.760678 | orchestrator | 2026-04-06 02:30:19.760691 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-06 02:30:19.760704 | orchestrator | Monday 06 April 2026 02:30:15 +0000 (0:00:02.146) 0:01:02.240 ********** 2026-04-06 02:30:19.760740 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:30:19.760755 | 
orchestrator | 2026-04-06 02:30:19.760782 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-06 02:30:19.760806 | orchestrator | Monday 06 April 2026 02:30:15 +0000 (0:00:00.684) 0:01:02.924 ********** 2026-04-06 02:30:19.760821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-06 02:30:19.760855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-06 02:30:19.760870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 02:30:19.760893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 02:30:20.503126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 02:30:20.503221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 02:30:20.503252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-06 02:30:20.503270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 02:30:20.503276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 02:30:20.503282 | orchestrator | 2026-04-06 02:30:20.503288 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-06 02:30:20.503294 | orchestrator | Monday 06 April 2026 02:30:19 +0000 (0:00:04.028) 0:01:06.953 ********** 2026-04-06 02:30:20.503327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-06 02:30:20.503339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 02:30:20.503349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 02:30:20.503354 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:30:20.503364 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-06 02:30:20.503369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 02:30:20.503373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 02:30:20.503378 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:30:20.503388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-06 02:30:30.478958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 02:30:30.479072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 02:30:30.479089 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:30:30.479103 | orchestrator | 2026-04-06 02:30:30.479115 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-06 02:30:30.479129 | orchestrator | Monday 06 April 2026 02:30:20 +0000 (0:00:00.749) 0:01:07.702 ********** 2026-04-06 02:30:30.479159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-06 02:30:30.479188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-06 02:30:30.479200 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:30:30.479210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-06 02:30:30.479221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-06 02:30:30.479232 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:30:30.479243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-06 02:30:30.479254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-06 02:30:30.479264 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:30:30.479275 | orchestrator | 2026-04-06 02:30:30.479285 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-06 02:30:30.479296 | orchestrator | Monday 06 April 2026 02:30:21 +0000 (0:00:00.934) 0:01:08.637 ********** 2026-04-06 02:30:30.479306 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:30:30.479316 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:30:30.479327 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:30:30.479338 | orchestrator | 2026-04-06 02:30:30.479348 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-06 02:30:30.479359 | orchestrator | Monday 06 April 2026 02:30:22 +0000 (0:00:01.558) 0:01:10.196 ********** 2026-04-06 02:30:30.479393 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:30:30.479404 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:30:30.479427 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:30:30.479438 | orchestrator | 2026-04-06 02:30:30.479449 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-06 02:30:30.479459 | orchestrator | 
Monday 06 April 2026 02:30:25 +0000 (0:00:02.095) 0:01:12.291 ********** 2026-04-06 02:30:30.479470 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:30:30.479481 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:30:30.479491 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:30:30.479501 | orchestrator | 2026-04-06 02:30:30.479513 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-06 02:30:30.479524 | orchestrator | Monday 06 April 2026 02:30:25 +0000 (0:00:00.340) 0:01:12.632 ********** 2026-04-06 02:30:30.479534 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:30:30.479544 | orchestrator | 2026-04-06 02:30:30.479555 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-06 02:30:30.479583 | orchestrator | Monday 06 April 2026 02:30:26 +0000 (0:00:00.712) 0:01:13.344 ********** 2026-04-06 02:30:30.479597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-04-06 02:30:30.479614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-04-06 02:30:30.479626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-04-06 02:30:30.479637 | orchestrator | 2026-04-06 02:30:30.479647 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-06 02:30:30.479658 | orchestrator | Monday 06 April 2026 02:30:29 +0000 (0:00:02.900) 0:01:16.244 ********** 2026-04-06 02:30:30.479676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-04-06 02:30:30.479688 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:30:30.479813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-04-06 02:30:38.499203 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:30:38.499295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-04-06 02:30:38.499308 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:30:38.499316 | orchestrator | 2026-04-06 02:30:38.499324 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-06 02:30:38.499333 | orchestrator | Monday 06 April 2026 02:30:30 +0000 (0:00:01.435) 0:01:17.680 ********** 2026-04-06 02:30:38.499355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-06 02:30:38.499365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-06 02:30:38.499374 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:30:38.499400 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-06 02:30:38.499408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-06 02:30:38.499415 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:30:38.499422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-06 02:30:38.499430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-06 02:30:38.499437 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:30:38.499443 | orchestrator | 2026-04-06 02:30:38.499450 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-04-06 02:30:38.499457 | orchestrator | Monday 06 April 2026 02:30:32 +0000 (0:00:01.753) 0:01:19.434 ********** 2026-04-06 02:30:38.499464 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:30:38.499471 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:30:38.499478 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:30:38.499485 | orchestrator | 2026-04-06 02:30:38.499495 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-06 02:30:38.499514 | orchestrator | Monday 06 April 2026 02:30:32 +0000 (0:00:00.501) 0:01:19.936 ********** 2026-04-06 02:30:38.499521 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:30:38.499528 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:30:38.499535 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:30:38.499542 | orchestrator | 2026-04-06 02:30:38.499549 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-06 02:30:38.499555 | orchestrator | Monday 06 April 2026 02:30:34 +0000 (0:00:01.389) 0:01:21.326 ********** 2026-04-06 02:30:38.499562 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:30:38.499569 | orchestrator | 2026-04-06 02:30:38.499576 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-06 02:30:38.499583 | orchestrator | Monday 06 April 2026 02:30:35 +0000 (0:00:00.977) 0:01:22.303 ********** 2026-04-06 02:30:38.499594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-06 02:30:38.499610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 02:30:38.499620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 
02:30:38.499628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 02:30:38.499641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-06 02:30:39.244061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 02:30:39.244251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 02:30:39.244317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 02:30:39.244340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-06 02:30:39.244360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 02:30:39.244409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 02:30:39.244432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 02:30:39.244455 | orchestrator | 2026-04-06 02:30:39.244475 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-06 02:30:39.244488 | orchestrator | Monday 06 April 2026 02:30:38 +0000 (0:00:03.481) 0:01:25.784 ********** 2026-04-06 02:30:39.244502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-06 02:30:39.244514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 02:30:39.244526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 02:30:39.244538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 02:30:39.244549 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:30:39.244577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-06 02:30:46.074535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2026-04-06 02:30:46.074638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 02:30:46.074650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 02:30:46.074659 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:30:46.074671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-06 02:30:46.074684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 02:30:46.074823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 
02:30:46.074837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 02:30:46.074845 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:30:46.074853 | orchestrator | 2026-04-06 02:30:46.074861 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-06 02:30:46.074871 | orchestrator | Monday 06 April 2026 02:30:39 +0000 (0:00:00.765) 0:01:26.550 ********** 2026-04-06 02:30:46.074879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-06 02:30:46.074888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-06 02:30:46.074897 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:30:46.074905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-06 02:30:46.074913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-06 02:30:46.074920 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:30:46.074928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-06 02:30:46.074935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-06 02:30:46.074943 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:30:46.074950 | orchestrator | 2026-04-06 02:30:46.074957 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-06 02:30:46.074965 | orchestrator | Monday 06 April 2026 02:30:40 +0000 (0:00:01.367) 0:01:27.917 ********** 2026-04-06 02:30:46.074972 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:30:46.074987 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:30:46.074995 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:30:46.075002 | orchestrator | 2026-04-06 02:30:46.075009 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-06 02:30:46.075017 | orchestrator | Monday 06 April 2026 02:30:42 +0000 (0:00:01.409) 0:01:29.327 ********** 2026-04-06 02:30:46.075024 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:30:46.075034 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:30:46.075043 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:30:46.075052 | orchestrator | 2026-04-06 02:30:46.075060 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-06 
02:30:46.075070 | orchestrator | Monday 06 April 2026 02:30:44 +0000 (0:00:02.132) 0:01:31.459 ********** 2026-04-06 02:30:46.075078 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:30:46.075087 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:30:46.075095 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:30:46.075104 | orchestrator | 2026-04-06 02:30:46.075113 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-04-06 02:30:46.075122 | orchestrator | Monday 06 April 2026 02:30:44 +0000 (0:00:00.343) 0:01:31.802 ********** 2026-04-06 02:30:46.075130 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:30:46.075139 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:30:46.075148 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:30:46.075157 | orchestrator | 2026-04-06 02:30:46.075165 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-06 02:30:46.075174 | orchestrator | Monday 06 April 2026 02:30:44 +0000 (0:00:00.332) 0:01:32.135 ********** 2026-04-06 02:30:46.075183 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:30:46.075192 | orchestrator | 2026-04-06 02:30:46.075200 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-06 02:30:46.075210 | orchestrator | Monday 06 April 2026 02:30:46 +0000 (0:00:01.137) 0:01:33.272 ********** 2026-04-06 02:30:49.617882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-06 02:30:49.617983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 02:30:49.617998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 02:30:49.618072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 02:30:49.618081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 02:30:49.618090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 02:30:49.618120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-06 02:30:49.618128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-06 02:30:49.618136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 02:30:49.618176 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 02:30:49.618185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 02:30:49.618192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 02:30:49.618210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 02:30:50.710503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-06 02:30:50.710619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-06 02:30:50.710676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 02:30:50.710701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 02:30:50.710859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 02:30:50.710893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 02:30:50.710927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 02:30:50.710940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': 
'30'}}})  2026-04-06 02:30:50.710964 | orchestrator | 2026-04-06 02:30:50.710978 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-06 02:30:50.710990 | orchestrator | Monday 06 April 2026 02:30:50 +0000 (0:00:03.995) 0:01:37.267 ********** 2026-04-06 02:30:50.711005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-06 02:30:50.711020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 02:30:50.711034 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 02:30:50.711047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 02:30:50.711068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-06 02:30:51.267605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 02:30:51.267746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 02:30:51.267761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 02:30:51.267769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 02:30:51.268143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-06 02:30:51.268168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 02:30:51.268175 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:30:51.268217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 02:30:51.268251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 02:30:51.268261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-06 02:30:51.268268 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:30:51.268276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-06 02:30:51.268284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 02:30:51.268291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 02:30:51.268310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 02:31:01.823822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 02:31:01.823982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 02:31:01.824006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-06 02:31:01.824024 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:31:01.824042 | orchestrator | 2026-04-06 02:31:01.824060 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-04-06 02:31:01.824079 | orchestrator | Monday 06 April 2026 02:30:51 +0000 (0:00:01.200) 0:01:38.468 ********** 2026-04-06 02:31:01.824097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-06 02:31:01.824113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-06 02:31:01.824124 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:31:01.824134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-06 02:31:01.824144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-06 02:31:01.824154 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:31:01.824164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-06 02:31:01.824198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-06 02:31:01.824212 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:31:01.824232 | orchestrator | 2026-04-06 02:31:01.824257 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-04-06 02:31:01.824273 | orchestrator | Monday 06 April 2026 02:30:52 +0000 (0:00:01.421) 0:01:39.889 ********** 2026-04-06 02:31:01.824289 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:31:01.824308 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:31:01.824325 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:31:01.824341 | orchestrator | 2026-04-06 02:31:01.824356 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-06 02:31:01.824368 | orchestrator | Monday 06 April 2026 02:30:53 +0000 (0:00:01.289) 0:01:41.179 ********** 2026-04-06 02:31:01.824380 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:31:01.824391 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:31:01.824402 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:31:01.824413 | 
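The `haproxy` entries skipped above (e.g. `designate_api` / `designate_api_external`) all share the same shape: `mode`, `port`, `listen_port`, and an `external` flag. A minimal sketch of how such an entry could be reduced to an HAProxy `listen` stanza — the renderer below is illustrative only, not kolla-ansible's actual Jinja2 template, and the internal VIP `192.168.16.9` is inferred from the `no_proxy` lists elsewhere in this log:

```python
# Illustrative renderer for a kolla-ansible-style haproxy service dict.
# The dict shape matches the skipped loop items in the log; the rendering
# logic is a simplified sketch, not kolla-ansible's real template.

def render_haproxy_service(name, svc, vip="192.168.16.9"):
    """Render one service entry into a minimal HAProxy listen block."""
    return "\n".join([
        f"listen {name}",
        f"    mode {svc['mode']}",
        f"    bind {vip}:{svc['listen_port']}",
    ])

svc = {"enabled": "yes", "mode": "http", "external": False,
       "port": "9001", "listen_port": "9001"}
print(render_haproxy_service("designate_api", svc))
```

External variants (`*_external`) carry an extra `external_fqdn` (here `api.testbed.osism.xyz`) and would bind the external VIP instead.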
orchestrator | 2026-04-06 02:31:01.824424 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-06 02:31:01.824436 | orchestrator | Monday 06 April 2026 02:30:56 +0000 (0:00:02.095) 0:01:43.275 ********** 2026-04-06 02:31:01.824470 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:31:01.824487 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:31:01.824503 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:31:01.824518 | orchestrator | 2026-04-06 02:31:01.824534 | orchestrator | TASK [include_role : glance] *************************************************** 2026-04-06 02:31:01.824550 | orchestrator | Monday 06 April 2026 02:30:56 +0000 (0:00:00.396) 0:01:43.671 ********** 2026-04-06 02:31:01.824567 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:31:01.824582 | orchestrator | 2026-04-06 02:31:01.824598 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-06 02:31:01.824616 | orchestrator | Monday 06 April 2026 02:30:57 +0000 (0:00:01.141) 0:01:44.813 ********** 2026-04-06 02:31:01.824648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 02:31:01.824671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-06 02:31:01.824777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 02:31:05.074896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-06 02:31:05.075127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 02:31:05.076114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-06 02:31:05.076197 | orchestrator | 2026-04-06 02:31:05.076211 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-06 02:31:05.076222 | orchestrator | Monday 06 April 2026 02:31:01 +0000 (0:00:04.324) 0:01:49.138 ********** 2026-04-06 02:31:05.076245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-06 02:31:05.076266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-06 02:31:09.483343 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:31:09.483536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-06 
02:31:09.483592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-06 02:31:09.483634 | orchestrator | skipping: [testbed-node-1] 
2026-04-06 02:31:09.483672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-06 02:31:09.483692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-06 02:31:09.483740 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:31:09.483752 | orchestrator | 2026-04-06 02:31:09.483764 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-04-06 02:31:09.483777 | orchestrator | 
Monday 06 April 2026 02:31:05 +0000 (0:00:03.250) 0:01:52.388 ********** 2026-04-06 02:31:09.483789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-06 02:31:09.483811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-06 02:31:18.664341 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:31:18.664420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-06 02:31:18.664429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-06 02:31:18.664435 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:31:18.664439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-06 02:31:18.664456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-06 02:31:18.664460 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:31:18.664464 | orchestrator | 2026-04-06 02:31:18.664469 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-06 02:31:18.664503 | orchestrator | Monday 06 April 2026 02:31:09 +0000 (0:00:04.294) 0:01:56.682 ********** 2026-04-06 02:31:18.664507 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:31:18.664511 | orchestrator 
| changed: [testbed-node-1]
2026-04-06 02:31:18.664515 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:31:18.664519 | orchestrator |
2026-04-06 02:31:18.664523 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-04-06 02:31:18.664526 | orchestrator | Monday 06 April 2026 02:31:10 +0000 (0:00:01.396) 0:01:58.079 **********
2026-04-06 02:31:18.664530 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:31:18.664534 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:31:18.664538 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:31:18.664542 | orchestrator |
2026-04-06 02:31:18.664545 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-04-06 02:31:18.664549 | orchestrator | Monday 06 April 2026 02:31:13 +0000 (0:00:02.244) 0:02:00.324 **********
2026-04-06 02:31:18.664553 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:31:18.664557 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:31:18.664560 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:31:18.664564 | orchestrator |
2026-04-06 02:31:18.664568 | orchestrator | TASK [include_role : grafana] **************************************************
2026-04-06 02:31:18.664572 | orchestrator | Monday 06 April 2026 02:31:13 +0000 (0:00:00.327) 0:02:00.652 **********
2026-04-06 02:31:18.664576 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:31:18.664579 | orchestrator |
2026-04-06 02:31:18.664583 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2026-04-06 02:31:18.664587 | orchestrator | Monday 06 April 2026 02:31:14 +0000 (0:00:01.325) 0:02:01.977 **********
2026-04-06 02:31:18.664600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-06 02:31:18.664606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-06 02:31:18.664611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-06 02:31:18.664615 | 
orchestrator | 2026-04-06 02:31:18.664619 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-06 02:31:18.664628 | orchestrator | Monday 06 April 2026 02:31:18 +0000 (0:00:03.253) 0:02:05.231 ********** 2026-04-06 02:31:18.664632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-06 02:31:18.664637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-06 02:31:18.664641 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:31:18.664645 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:31:18.664691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-06 02:31:18.664699 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:31:18.664736 | orchestrator | 2026-04-06 02:31:18.664741 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-06 02:31:18.664745 | orchestrator | Monday 06 April 2026 02:31:18 +0000 (0:00:00.399) 0:02:05.631 ********** 2026-04-06 02:31:18.664749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-06 02:31:18.664760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-06 02:31:27.801934 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:31:27.802123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-06 02:31:27.802156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-06 02:31:27.802174 | orchestrator | skipping: 
[testbed-node-1]
2026-04-06 02:31:27.802193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-04-06 02:31:27.802211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-04-06 02:31:27.802262 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:31:27.802280 | orchestrator |
2026-04-06 02:31:27.802299 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-04-06 02:31:27.802318 | orchestrator | Monday 06 April 2026 02:31:19 +0000 (0:00:00.995) 0:02:06.626 **********
2026-04-06 02:31:27.802334 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:31:27.802350 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:31:27.802360 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:31:27.802370 | orchestrator |
2026-04-06 02:31:27.802380 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-04-06 02:31:27.802389 | orchestrator | Monday 06 April 2026 02:31:20 +0000 (0:00:01.311) 0:02:07.937 **********
2026-04-06 02:31:27.802399 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:31:27.802409 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:31:27.802419 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:31:27.802428 | orchestrator |
2026-04-06 02:31:27.802438 | orchestrator | TASK [include_role : heat] *****************************************************
2026-04-06 02:31:27.802463 | orchestrator | Monday 06 April 2026 02:31:22 +0000 (0:00:02.122) 0:02:10.060 **********
2026-04-06 02:31:27.802475 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:31:27.802486 | orchestrator | skipping: [testbed-node-1]
2026-04-06
02:31:27.802498 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:31:27.802509 | orchestrator |
2026-04-06 02:31:27.802521 | orchestrator | TASK [include_role : horizon] **************************************************
2026-04-06 02:31:27.802532 | orchestrator | Monday 06 April 2026 02:31:23 +0000 (0:00:00.336) 0:02:10.396 **********
2026-04-06 02:31:27.802544 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:31:27.802555 | orchestrator |
2026-04-06 02:31:27.802566 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2026-04-06 02:31:27.802577 | orchestrator | Monday 06 April 2026 02:31:24 +0000 (0:00:01.228) 0:02:11.624 **********
2026-04-06 02:31:27.802618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra':
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-06 02:31:27.802655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-06 02:31:27.802679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-06 02:31:29.455511 | orchestrator | 2026-04-06 02:31:29.455613 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-04-06 02:31:29.455631 | orchestrator | Monday 06 April 2026 02:31:27 +0000 (0:00:03.372) 0:02:14.997 ********** 2026-04-06 02:31:29.455685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-06 02:31:29.455737 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:31:29.455775 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-06 02:31:29.455815 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:31:29.455837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-06 02:31:29.455851 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:31:29.455861 | orchestrator | 2026-04-06 02:31:29.455869 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-06 02:31:29.455877 | orchestrator | Monday 06 April 2026 02:31:28 +0000 (0:00:00.663) 0:02:15.661 ********** 2026-04-06 02:31:29.455885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-06 02:31:29.455901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-06 02:31:29.455913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-06 02:31:29.455929 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-06 02:31:38.768656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-06 02:31:38.768830 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:31:38.768847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-06 02:31:38.768863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-06 02:31:38.768892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-06 02:31:38.768904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-06 02:31:38.768916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-06 02:31:38.768926 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:31:38.768937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-06 02:31:38.768948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-06 02:31:38.768958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-06 02:31:38.768990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-06 02:31:38.769001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  
2026-04-06 02:31:38.769011 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:31:38.769021 | orchestrator |
2026-04-06 02:31:38.769032 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-04-06 02:31:38.769044 | orchestrator | Monday 06 April 2026 02:31:29 +0000 (0:00:00.993) 0:02:16.654 **********
2026-04-06 02:31:38.769054 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:31:38.769064 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:31:38.769073 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:31:38.769083 | orchestrator |
2026-04-06 02:31:38.769093 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-04-06 02:31:38.769103 | orchestrator | Monday 06 April 2026 02:31:31 +0000 (0:00:01.588) 0:02:18.242 **********
2026-04-06 02:31:38.769113 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:31:38.769124 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:31:38.769133 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:31:38.769143 | orchestrator |
2026-04-06 02:31:38.769153 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-04-06 02:31:38.769163 | orchestrator | Monday 06 April 2026 02:31:33 +0000 (0:00:00.335) 0:02:20.335 **********
2026-04-06 02:31:38.769173 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:31:38.769183 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:31:38.769212 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:31:38.769225 | orchestrator |
2026-04-06 02:31:38.769236 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-04-06 02:31:38.769248 | orchestrator | Monday 06 April 2026 02:31:33 +0000 (0:00:00.335) 0:02:20.670 **********
2026-04-06 02:31:38.769260 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:31:38.769271 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:31:38.769283 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:31:38.769294 | orchestrator | 2026-04-06 02:31:38.769306 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-06 02:31:38.769317 | orchestrator | Monday 06 April 2026 02:31:33 +0000 (0:00:00.339) 0:02:21.009 ********** 2026-04-06 02:31:38.769329 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:31:38.769340 | orchestrator | 2026-04-06 02:31:38.769352 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-04-06 02:31:38.769362 | orchestrator | Monday 06 April 2026 02:31:35 +0000 (0:00:01.299) 0:02:22.309 ********** 2026-04-06 02:31:38.769383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-06 02:31:38.769407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 02:31:38.769419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 02:31:38.769431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-06 02:31:38.769449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 02:31:39.422458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 02:31:39.422567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-06 02:31:39.422612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 02:31:39.422626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 02:31:39.422639 | 
orchestrator | 2026-04-06 02:31:39.422652 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-06 02:31:39.422665 | orchestrator | Monday 06 April 2026 02:31:38 +0000 (0:00:03.657) 0:02:25.967 ********** 2026-04-06 02:31:39.422697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-06 02:31:39.422836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-04-06 02:31:39.422864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 02:31:39.422901 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:31:39.422923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-06 02:31:39.422943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-06 02:31:39.422963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 02:31:39.423005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 02:31:49.352982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 02:31:49.353107 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:31:49.353126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 02:31:49.353135 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:31:49.353143 | orchestrator | 2026-04-06 02:31:49.353151 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-04-06 02:31:49.353161 | orchestrator | Monday 06 April 2026 02:31:39 +0000 (0:00:00.653) 0:02:26.620 ********** 2026-04-06 
02:31:49.353171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-06 02:31:49.353182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-06 02:31:49.353191 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:31:49.353199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-06 02:31:49.353207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-06 02:31:49.353215 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:31:49.353223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-06 02:31:49.353231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-06 02:31:49.353239 | orchestrator | skipping: 
[testbed-node-2] 2026-04-06 02:31:49.353246 | orchestrator | 2026-04-06 02:31:49.353255 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-06 02:31:49.353261 | orchestrator | Monday 06 April 2026 02:31:40 +0000 (0:00:01.201) 0:02:27.822 ********** 2026-04-06 02:31:49.353266 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:31:49.353271 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:31:49.353301 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:31:49.353309 | orchestrator | 2026-04-06 02:31:49.353316 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-06 02:31:49.353322 | orchestrator | Monday 06 April 2026 02:31:41 +0000 (0:00:01.351) 0:02:29.174 ********** 2026-04-06 02:31:49.353330 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:31:49.353337 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:31:49.353345 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:31:49.353352 | orchestrator | 2026-04-06 02:31:49.353360 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-06 02:31:49.353368 | orchestrator | Monday 06 April 2026 02:31:44 +0000 (0:00:02.197) 0:02:31.371 ********** 2026-04-06 02:31:49.353375 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:31:49.353397 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:31:49.353403 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:31:49.353407 | orchestrator | 2026-04-06 02:31:49.353426 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-06 02:31:49.353431 | orchestrator | Monday 06 April 2026 02:31:44 +0000 (0:00:00.323) 0:02:31.695 ********** 2026-04-06 02:31:49.353435 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:31:49.353440 | orchestrator | 2026-04-06 02:31:49.353445 | orchestrator | TASK 
[haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-06 02:31:49.353449 | orchestrator | Monday 06 April 2026 02:31:45 +0000 (0:00:01.336) 0:02:33.032 ********** 2026-04-06 02:31:49.353455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-06 02:31:49.353464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-06 02:31:49.353470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 02:31:49.353481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 02:31:49.353491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-06 02:31:54.863081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 02:31:54.863175 | orchestrator | 2026-04-06 02:31:54.863185 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-06 02:31:54.863194 | orchestrator | Monday 06 April 2026 02:31:49 +0000 (0:00:03.518) 0:02:36.550 ********** 2026-04-06 02:31:54.863203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-06 02:31:54.863248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 02:31:54.863277 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:31:54.863289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-06 02:31:54.863311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 02:31:54.863317 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:31:54.863324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-06 02:31:54.863331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 02:31:54.863345 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:31:54.863355 | orchestrator | 2026-04-06 02:31:54.863366 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-06 02:31:54.863377 | orchestrator | Monday 06 April 2026 02:31:50 +0000 (0:00:00.732) 0:02:37.283 ********** 2026-04-06 02:31:54.863387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-06 02:31:54.863399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-06 02:31:54.863417 | orchestrator | skipping: 
[testbed-node-0] 2026-04-06 02:31:54.863430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-06 02:31:54.863440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-06 02:31:54.863450 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:31:54.863459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-06 02:31:54.863469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-06 02:31:54.863480 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:31:54.863489 | orchestrator | 2026-04-06 02:31:54.863505 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-06 02:31:54.863516 | orchestrator | Monday 06 April 2026 02:31:51 +0000 (0:00:00.975) 0:02:38.258 ********** 2026-04-06 02:31:54.863527 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:31:54.863536 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:31:54.863546 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:31:54.863556 | orchestrator | 2026-04-06 02:31:54.863565 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-06 02:31:54.863575 | orchestrator | Monday 06 April 2026 02:31:52 +0000 (0:00:01.698) 0:02:39.957 ********** 2026-04-06 02:31:54.863583 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:31:54.863595 | orchestrator | changed: 
[testbed-node-1] 2026-04-06 02:31:54.863606 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:31:54.863617 | orchestrator | 2026-04-06 02:31:54.863629 | orchestrator | TASK [include_role : manila] *************************************************** 2026-04-06 02:31:54.863648 | orchestrator | Monday 06 April 2026 02:31:54 +0000 (0:00:02.098) 0:02:42.055 ********** 2026-04-06 02:31:59.660408 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:31:59.660530 | orchestrator | 2026-04-06 02:31:59.660542 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-06 02:31:59.660549 | orchestrator | Monday 06 April 2026 02:31:55 +0000 (0:00:01.139) 0:02:43.194 ********** 2026-04-06 02:31:59.660557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-06 02:31:59.660587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 02:31:59.660595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 02:31:59.660603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-06 02:31:59.660670 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 02:31:59.660719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 02:31:59.660731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 02:31:59.660750 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 02:31:59.660759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-06 02:31:59.660768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 02:31:59.660783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 02:31:59.660800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 02:32:00.696261 | orchestrator | 2026-04-06 02:32:00.696377 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-06 02:32:00.696396 | orchestrator | Monday 06 April 2026 02:31:59 +0000 (0:00:03.746) 0:02:46.940 ********** 2026-04-06 02:32:00.696427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-06 02:32:00.696438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 02:32:00.696446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 02:32:00.696454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 02:32:00.696462 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:32:00.696483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-06 02:32:00.696506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 02:32:00.696521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 02:32:00.696529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 02:32:00.696536 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:32:00.696543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-06 02:32:00.696554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 02:32:00.696561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 02:32:00.696575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 02:32:12.468642 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:32:12.468824 | orchestrator | 2026-04-06 02:32:12.468841 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-06 02:32:12.468853 | orchestrator | Monday 06 April 2026 02:32:00 +0000 (0:00:01.069) 0:02:48.010 ********** 2026-04-06 02:32:12.468863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-06 02:32:12.468875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-06 02:32:12.468886 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:32:12.468895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-06 02:32:12.468904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-06 02:32:12.468912 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:32:12.468920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-06 02:32:12.468929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-06 02:32:12.468937 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:32:12.468946 | orchestrator | 2026-04-06 02:32:12.468954 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-06 02:32:12.468963 | orchestrator | Monday 06 April 2026 02:32:01 +0000 (0:00:00.936) 0:02:48.946 ********** 2026-04-06 02:32:12.468971 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:32:12.468979 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:32:12.468987 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:32:12.468996 | orchestrator | 2026-04-06 02:32:12.469004 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-06 02:32:12.469012 | orchestrator | Monday 06 April 2026 02:32:03 +0000 (0:00:01.300) 0:02:50.247 ********** 2026-04-06 02:32:12.469020 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:32:12.469029 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:32:12.469037 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:32:12.469045 | orchestrator | 2026-04-06 02:32:12.469053 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-06 02:32:12.469062 | orchestrator | Monday 06 April 2026 02:32:05 +0000 (0:00:02.110) 0:02:52.358 ********** 2026-04-06 02:32:12.469070 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:32:12.469078 | orchestrator | 2026-04-06 02:32:12.469087 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-06 02:32:12.469095 | orchestrator | Monday 06 April 2026 02:32:06 +0000 (0:00:01.464) 0:02:53.822 ********** 2026-04-06 02:32:12.469104 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-06 02:32:12.469112 | orchestrator | 2026-04-06 02:32:12.469140 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-06 02:32:12.469149 | orchestrator | Monday 06 April 2026 02:32:09 +0000 (0:00:03.206) 0:02:57.029 ********** 2026-04-06 02:32:12.469196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 02:32:12.469212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-06 02:32:12.469228 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:32:12.469252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 02:32:12.469278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-06 02:32:12.469303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 02:32:15.035261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-06 02:32:15.035371 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:32:15.035388 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:32:15.035400 | orchestrator | 2026-04-06 02:32:15.035412 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-04-06 02:32:15.035424 | orchestrator | Monday 06 April 2026 02:32:12 +0000 (0:00:02.636) 0:02:59.665 ********** 2026-04-06 02:32:15.035480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 
2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 02:32:15.035496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-06 02:32:15.035508 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:32:15.035542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 02:32:15.035573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': 
'1'}}})  2026-04-06 02:32:15.035585 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:32:15.035597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 02:32:15.035618 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-06 02:32:25.321500 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:32:25.321582 | orchestrator | 2026-04-06 02:32:25.321588 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-06 02:32:25.321594 | orchestrator | Monday 06 April 2026 02:32:15 +0000 (0:00:02.566) 0:03:02.231 ********** 2026-04-06 02:32:25.321600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-06 02:32:25.321648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-06 02:32:25.321654 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:32:25.321658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-06 02:32:25.321662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-06 02:32:25.321666 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:32:25.321670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-06 02:32:25.321674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-06 02:32:25.321678 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:32:25.321682 | orchestrator | 2026-04-06 02:32:25.321686 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-06 02:32:25.321723 | orchestrator | Monday 06 April 2026 02:32:18 +0000 (0:00:03.099) 0:03:05.330 ********** 2026-04-06 02:32:25.321727 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:32:25.321745 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:32:25.321749 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:32:25.321753 | orchestrator | 2026-04-06 02:32:25.321757 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-06 02:32:25.321761 | orchestrator | Monday 06 April 2026 02:32:20 +0000 (0:00:02.189) 0:03:07.519 ********** 2026-04-06 02:32:25.321765 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:32:25.321768 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:32:25.321772 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:32:25.321776 | orchestrator | 2026-04-06 
02:32:25.321780 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-06 02:32:25.321784 | orchestrator | Monday 06 April 2026 02:32:21 +0000 (0:00:01.598) 0:03:09.118 ********** 2026-04-06 02:32:25.321787 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:32:25.321791 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:32:25.321795 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:32:25.321799 | orchestrator | 2026-04-06 02:32:25.321802 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-06 02:32:25.321806 | orchestrator | Monday 06 April 2026 02:32:22 +0000 (0:00:00.351) 0:03:09.469 ********** 2026-04-06 02:32:25.321810 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:32:25.321814 | orchestrator | 2026-04-06 02:32:25.321818 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-06 02:32:25.321822 | orchestrator | Monday 06 April 2026 02:32:23 +0000 (0:00:01.422) 0:03:10.892 ********** 2026-04-06 02:32:25.321830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-06 02:32:25.321838 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-06 02:32:25.321842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-06 02:32:25.321846 | orchestrator | 2026-04-06 02:32:25.321850 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-06 02:32:25.321858 | orchestrator | Monday 06 April 2026 02:32:25 +0000 (0:00:01.520) 0:03:12.413 ********** 2026-04-06 02:32:25.321865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-06 02:32:34.492271 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:32:34.492387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-06 02:32:34.492406 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:32:34.492420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-06 02:32:34.492432 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:32:34.492443 | orchestrator | 2026-04-06 02:32:34.492456 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-06 02:32:34.492468 | orchestrator | Monday 06 April 2026 02:32:25 +0000 (0:00:00.432) 0:03:12.846 ********** 2026-04-06 02:32:34.492481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-06 02:32:34.492495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-06 02:32:34.492506 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:32:34.492518 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:32:34.492529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-06 02:32:34.492564 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:32:34.492577 | orchestrator | 2026-04-06 02:32:34.492632 | 
orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-06 02:32:34.492646 | orchestrator | Monday 06 April 2026 02:32:26 +0000 (0:00:00.997) 0:03:13.843 ********** 2026-04-06 02:32:34.492657 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:32:34.492668 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:32:34.492679 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:32:34.492724 | orchestrator | 2026-04-06 02:32:34.492744 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-06 02:32:34.492762 | orchestrator | Monday 06 April 2026 02:32:27 +0000 (0:00:00.484) 0:03:14.328 ********** 2026-04-06 02:32:34.492782 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:32:34.492803 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:32:34.492822 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:32:34.492841 | orchestrator | 2026-04-06 02:32:34.492857 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-06 02:32:34.492871 | orchestrator | Monday 06 April 2026 02:32:28 +0000 (0:00:01.348) 0:03:15.676 ********** 2026-04-06 02:32:34.492885 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:32:34.492897 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:32:34.492909 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:32:34.492922 | orchestrator | 2026-04-06 02:32:34.492935 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-04-06 02:32:34.492948 | orchestrator | Monday 06 April 2026 02:32:28 +0000 (0:00:00.359) 0:03:16.036 ********** 2026-04-06 02:32:34.492961 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:32:34.492973 | orchestrator | 2026-04-06 02:32:34.492984 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 
2026-04-06 02:32:34.492995 | orchestrator | Monday 06 April 2026 02:32:30 +0000 (0:00:01.584) 0:03:17.620 ********** 2026-04-06 02:32:34.493028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-06 02:32:34.493051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:34.493064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:34.493087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:34.493099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-06 02:32:34.493122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:34.590946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-06 02:32:34.591058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-06 02:32:34.591073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:34.591114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 02:32:34.591128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:34.591141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-06 02:32:34.591172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-06 02:32:34.591194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': 
{'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-06 02:32:34.591208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:34.591223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-06 
02:32:34.591232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-06 02:32:34.591249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:34.713380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-06 02:32:34.713514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:34.713531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-06 02:32:34.713545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:34.713561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-06 02:32:34.713593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:34.713612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:34.713633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-06 02:32:34.713647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': 
{'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:34.713658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-06 02:32:34.713670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 
5672"], 'timeout': '30'}}})  2026-04-06 02:32:34.713752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:35.002537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:35.002650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 02:32:35.002665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-06 02:32:35.002677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:35.002716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-06 
02:32:35.002733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-06 02:32:35.002761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:35.002783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-06 02:32:35.002793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': 
{'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:35.002803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 02:32:35.002813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
-u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-06 02:32:35.002825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:35.002845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-06 02:32:36.236732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-06 02:32:36.236829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-06 02:32:36.236844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:36.236856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-06 02:32:36.236865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-06 02:32:36.236872 | orchestrator |
2026-04-06 02:32:36.236878 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-04-06 02:32:36.236905 | orchestrator | Monday 06 April 2026 02:32:34 +0000 (0:00:04.583) 0:03:22.203 **********
2026-04-06 02:32:36.236938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-06 02:32:36.236946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:36.236953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:36.236958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:36.236963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-06 02:32:36.236981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': 
{'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:36.323010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-06 02:32:36.323117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-06 02:32:36.323134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:36.323144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 02:32:36.323153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:36.323185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-06 02:32:36.323232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-06 02:32:36.323248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:36.323262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-06 02:32:36.323276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:36.323291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:36.323313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:36.323341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-06 02:32:36.437465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-06 02:32:36.437539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-06 02:32:36.437555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-06 02:32:36.437581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:36.437592 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:32:36.437618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:36.437634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:36.437640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-06 02:32:36.437646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:36.437653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-06 02:32:36.437658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-06 02:32:36.437663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:36.437672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:36.665209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 02:32:36.665287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-06 02:32:36.665336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:36.665377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-06 02:32:36.665387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-06 02:32:36.665399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:36.665423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-06 02:32:36.665440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:36.665446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 02:32:36.665472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696', 'tls_backend': 'yes'}}}})  2026-04-06 02:32:36.665485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:36.665492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-06 02:32:36.665505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
2026-04-06 02:32:47.375032 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:32:47.375142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-06 02:32:47.375161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-06 02:32:47.375199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-06 02:32:47.375228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-06 02:32:47.375240 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:32:47.375250 | orchestrator | 2026-04-06 02:32:47.375261 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-06 02:32:47.375272 | orchestrator | Monday 06 April 2026 02:32:36 +0000 (0:00:01.652) 0:03:23.855 ********** 2026-04-06 02:32:47.375283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-06 02:32:47.375295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-06 02:32:47.375307 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:32:47.375316 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-06 02:32:47.375326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-06 02:32:47.375336 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:32:47.375363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-06 02:32:47.375373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-06 02:32:47.375391 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:32:47.375400 | orchestrator | 2026-04-06 02:32:47.375411 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-06 02:32:47.375421 | orchestrator | Monday 06 April 2026 02:32:38 +0000 (0:00:02.201) 0:03:26.057 ********** 2026-04-06 02:32:47.375431 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:32:47.375440 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:32:47.375450 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:32:47.375460 | orchestrator | 2026-04-06 02:32:47.375470 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-06 02:32:47.375480 | orchestrator | Monday 06 April 2026 02:32:40 +0000 (0:00:01.344) 0:03:27.402 ********** 2026-04-06 02:32:47.375489 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:32:47.375499 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:32:47.375508 | 
orchestrator | changed: [testbed-node-2] 2026-04-06 02:32:47.375518 | orchestrator | 2026-04-06 02:32:47.375527 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-06 02:32:47.375559 | orchestrator | Monday 06 April 2026 02:32:42 +0000 (0:00:02.166) 0:03:29.568 ********** 2026-04-06 02:32:47.375583 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:32:47.375594 | orchestrator | 2026-04-06 02:32:47.375605 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-06 02:32:47.375616 | orchestrator | Monday 06 April 2026 02:32:43 +0000 (0:00:01.271) 0:03:30.840 ********** 2026-04-06 02:32:47.375630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-06 02:32:47.375650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-06 02:32:47.375663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-06 02:32:47.375682 | orchestrator | 2026-04-06 02:32:47.375722 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-06 02:32:59.198266 | orchestrator | Monday 06 April 2026 02:32:47 +0000 (0:00:03.726) 0:03:34.566 ********** 2026-04-06 02:32:59.198428 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-06 02:32:59.198461 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:32:59.198483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-06 02:32:59.198503 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:32:59.198543 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-06 02:32:59.198557 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:32:59.198569 | orchestrator | 2026-04-06 02:32:59.198580 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-06 02:32:59.198592 | orchestrator | Monday 06 April 2026 02:32:47 +0000 (0:00:00.608) 0:03:35.175 ********** 2026-04-06 02:32:59.198605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-06 02:32:59.198663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-06 02:32:59.198720 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:32:59.198739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-06 02:32:59.198758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-06 02:32:59.198781 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:32:59.198827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-06 02:32:59.198847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-06 02:32:59.198863 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:32:59.198876 | orchestrator | 2026-04-06 02:32:59.198890 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-06 02:32:59.198903 | orchestrator | Monday 06 April 2026 02:32:48 +0000 (0:00:00.870) 0:03:36.046 ********** 2026-04-06 02:32:59.198916 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:32:59.198929 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:32:59.198942 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:32:59.198955 | orchestrator | 2026-04-06 02:32:59.198967 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-06 02:32:59.198980 | orchestrator | Monday 06 April 2026 02:32:51 +0000 (0:00:02.256) 0:03:38.303 ********** 2026-04-06 02:32:59.198993 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:32:59.199005 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:32:59.199017 | orchestrator | changed: 
[testbed-node-2] 2026-04-06 02:32:59.199030 | orchestrator | 2026-04-06 02:32:59.199043 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-04-06 02:32:59.199056 | orchestrator | Monday 06 April 2026 02:32:53 +0000 (0:00:01.994) 0:03:40.298 ********** 2026-04-06 02:32:59.199070 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:32:59.199082 | orchestrator | 2026-04-06 02:32:59.199093 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-06 02:32:59.199104 | orchestrator | Monday 06 April 2026 02:32:54 +0000 (0:00:01.723) 0:03:42.022 ********** 2026-04-06 02:32:59.199120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-06 
02:32:59.199154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 02:32:59.199168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 02:32:59.199191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-06 02:33:00.540254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 02:33:00.540371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 02:33:00.540411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 
'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-06 02:33:00.540419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 02:33:00.540424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 02:33:00.540429 | orchestrator | 2026-04-06 02:33:00.540438 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-06 02:33:00.540451 | orchestrator | Monday 06 April 2026 02:32:59 +0000 (0:00:04.371) 0:03:46.394 ********** 2026-04-06 02:33:00.540490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2026-04-06 02:33:00.540511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 02:33:00.540520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 02:33:00.540525 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:33:00.540531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-06 02:33:00.540541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 02:33:12.171208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 02:33:12.171281 | 
orchestrator | skipping: [testbed-node-1] 2026-04-06 02:33:12.171305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-06 02:33:12.171324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 02:33:12.171330 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 02:33:12.171336 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:33:12.171341 | orchestrator | 2026-04-06 02:33:12.171347 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-06 02:33:12.171353 | orchestrator | Monday 06 April 2026 02:33:00 +0000 (0:00:01.343) 0:03:47.737 ********** 2026-04-06 02:33:12.171359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-06 02:33:12.171367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-06 02:33:12.171373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-06 02:33:12.171388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-06 
02:33:12.171394 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:33:12.171400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-06 02:33:12.171405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-06 02:33:12.171415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-06 02:33:12.171420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-06 02:33:12.171425 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:33:12.171431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-06 02:33:12.171436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-06 02:33:12.171444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-06 02:33:12.171449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-06 02:33:12.171454 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:33:12.171459 | orchestrator | 2026-04-06 02:33:12.171465 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-06 02:33:12.171470 | orchestrator | Monday 06 April 2026 02:33:01 +0000 (0:00:00.998) 0:03:48.735 ********** 2026-04-06 02:33:12.171475 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:33:12.171480 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:33:12.171486 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:33:12.171491 | orchestrator | 2026-04-06 02:33:12.171496 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-06 02:33:12.171501 | orchestrator | Monday 06 April 2026 02:33:02 +0000 (0:00:01.439) 0:03:50.175 ********** 2026-04-06 02:33:12.171506 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:33:12.171511 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:33:12.171516 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:33:12.171522 | orchestrator | 2026-04-06 02:33:12.171527 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-06 02:33:12.171532 | orchestrator | Monday 06 April 2026 02:33:05 +0000 (0:00:02.333) 0:03:52.509 ********** 2026-04-06 02:33:12.171537 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:33:12.171542 | orchestrator | 2026-04-06 02:33:12.171547 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-06 02:33:12.171552 | orchestrator | Monday 06 April 2026 02:33:07 +0000 (0:00:01.803) 0:03:54.312 ********** 2026-04-06 02:33:12.171557 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-06 02:33:12.171563 | orchestrator | 2026-04-06 02:33:12.171568 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-06 02:33:12.171573 | orchestrator | Monday 06 April 2026 02:33:08 +0000 (0:00:00.962) 0:03:55.275 ********** 2026-04-06 02:33:12.171580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-06 02:33:12.171593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-06 02:33:23.327337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-06 02:33:23.327454 | orchestrator | 2026-04-06 02:33:23.327473 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-06 02:33:23.327487 | orchestrator | Monday 06 April 2026 02:33:12 +0000 (0:00:04.095) 0:03:59.371 ********** 2026-04-06 02:33:23.327501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-06 02:33:23.327517 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:33:23.327558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-06 02:33:23.327579 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:33:23.327597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-06 02:33:23.327616 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:33:23.327635 | orchestrator | 2026-04-06 02:33:23.327654 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-06 02:33:23.327675 | orchestrator | Monday 06 April 2026 02:33:13 +0000 (0:00:01.229) 0:04:00.601 ********** 2026-04-06 02:33:23.327749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-06 02:33:23.327766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-06 02:33:23.327804 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:33:23.327816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-06 02:33:23.327828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-06 02:33:23.327839 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:33:23.327851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout 
tunnel 1h']}})  2026-04-06 02:33:23.327862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-06 02:33:23.327894 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:33:23.327908 | orchestrator | 2026-04-06 02:33:23.327921 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-06 02:33:23.327933 | orchestrator | Monday 06 April 2026 02:33:14 +0000 (0:00:01.442) 0:04:02.043 ********** 2026-04-06 02:33:23.327944 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:33:23.327955 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:33:23.327966 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:33:23.327977 | orchestrator | 2026-04-06 02:33:23.327988 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-06 02:33:23.327999 | orchestrator | Monday 06 April 2026 02:33:17 +0000 (0:00:02.305) 0:04:04.349 ********** 2026-04-06 02:33:23.328010 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:33:23.328021 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:33:23.328031 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:33:23.328042 | orchestrator | 2026-04-06 02:33:23.328053 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-06 02:33:23.328064 | orchestrator | Monday 06 April 2026 02:33:19 +0000 (0:00:02.587) 0:04:06.936 ********** 2026-04-06 02:33:23.328077 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-06 02:33:23.328088 | orchestrator | 2026-04-06 02:33:23.328099 | orchestrator | TASK [haproxy-config : Copying over 
nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-06 02:33:23.328110 | orchestrator | Monday 06 April 2026 02:33:20 +0000 (0:00:01.156) 0:04:08.092 ********** 2026-04-06 02:33:23.328130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-06 02:33:23.328143 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:33:23.328155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-06 02:33:23.328175 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:33:23.328187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 
1h']}}}})  2026-04-06 02:33:23.328198 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:33:23.328209 | orchestrator | 2026-04-06 02:33:23.328221 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-06 02:33:23.328232 | orchestrator | Monday 06 April 2026 02:33:21 +0000 (0:00:01.086) 0:04:09.179 ********** 2026-04-06 02:33:23.328243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-06 02:33:23.328260 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:33:23.328286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-06 02:33:23.328322 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:33:47.544964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': 
['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-06 02:33:47.545115 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:33:47.545143 | orchestrator | 2026-04-06 02:33:47.545165 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-06 02:33:47.545185 | orchestrator | Monday 06 April 2026 02:33:23 +0000 (0:00:01.341) 0:04:10.521 ********** 2026-04-06 02:33:47.545205 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:33:47.545223 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:33:47.545241 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:33:47.545297 | orchestrator | 2026-04-06 02:33:47.545317 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-06 02:33:47.545336 | orchestrator | Monday 06 April 2026 02:33:24 +0000 (0:00:01.641) 0:04:12.162 ********** 2026-04-06 02:33:47.545355 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:33:47.545373 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:33:47.545392 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:33:47.545410 | orchestrator | 2026-04-06 02:33:47.545427 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-06 02:33:47.545445 | orchestrator | Monday 06 April 2026 02:33:27 +0000 (0:00:02.729) 0:04:14.892 ********** 2026-04-06 02:33:47.545499 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:33:47.545520 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:33:47.545539 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:33:47.545556 | orchestrator | 2026-04-06 02:33:47.545592 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-06 02:33:47.545612 | orchestrator | Monday 06 
April 2026 02:33:30 +0000 (0:00:02.706) 0:04:17.599 ********** 2026-04-06 02:33:47.545630 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-06 02:33:47.545650 | orchestrator | 2026-04-06 02:33:47.545669 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-06 02:33:47.545717 | orchestrator | Monday 06 April 2026 02:33:31 +0000 (0:00:01.279) 0:04:18.878 ********** 2026-04-06 02:33:47.545738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-06 02:33:47.545758 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:33:47.545777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-06 02:33:47.545794 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:33:47.545812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-06 02:33:47.545831 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:33:47.545849 | orchestrator | 2026-04-06 02:33:47.545868 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-06 02:33:47.545886 | orchestrator | Monday 06 April 2026 02:33:32 +0000 (0:00:01.327) 0:04:20.206 ********** 2026-04-06 02:33:47.545937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-06 02:33:47.545958 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:33:47.545977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-06 
02:33:47.546092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-06 02:33:47.546122 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:33:47.546140 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:33:47.546161 | orchestrator | 2026-04-06 02:33:47.546193 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-06 02:33:47.546214 | orchestrator | Monday 06 April 2026 02:33:34 +0000 (0:00:01.472) 0:04:21.679 ********** 2026-04-06 02:33:47.546233 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:33:47.546252 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:33:47.546272 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:33:47.546292 | orchestrator | 2026-04-06 02:33:47.546313 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-06 02:33:47.546333 | orchestrator | Monday 06 April 2026 02:33:36 +0000 (0:00:01.934) 0:04:23.614 ********** 2026-04-06 02:33:47.546354 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:33:47.546375 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:33:47.546395 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:33:47.546415 | orchestrator | 2026-04-06 02:33:47.546434 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-06 02:33:47.546455 | orchestrator | Monday 06 April 2026 02:33:38 +0000 (0:00:02.498) 0:04:26.112 ********** 2026-04-06 02:33:47.546475 | 
orchestrator | ok: [testbed-node-0] 2026-04-06 02:33:47.546495 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:33:47.546515 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:33:47.546535 | orchestrator | 2026-04-06 02:33:47.546555 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-06 02:33:47.546576 | orchestrator | Monday 06 April 2026 02:33:42 +0000 (0:00:03.391) 0:04:29.503 ********** 2026-04-06 02:33:47.546597 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:33:47.546617 | orchestrator | 2026-04-06 02:33:47.546638 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-06 02:33:47.546658 | orchestrator | Monday 06 April 2026 02:33:44 +0000 (0:00:01.758) 0:04:31.262 ********** 2026-04-06 02:33:47.546720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 02:33:47.546744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 02:33:47.546802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 02:33:48.328339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 02:33:48.328451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 02:33:48.328466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 02:33:48.328477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 02:33:48.328487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 02:33:48.328546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 02:33:48.328572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 02:33:48.328581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 02:33:48.328590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 02:33:48.328599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 02:33:48.328607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 02:33:48.328651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 02:33:48.328661 | orchestrator | 2026-04-06 02:33:48.328671 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-06 02:33:48.328733 | orchestrator | Monday 06 April 2026 02:33:47 +0000 (0:00:03.632) 0:04:34.894 ********** 2026-04-06 02:33:48.328755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-06 02:33:48.490558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 02:33:48.490672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 02:33:48.490737 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 02:33:48.490749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 02:33:48.490785 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:33:48.490798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-06 02:33:48.490809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 02:33:48.490853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 02:33:48.490865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 02:33:48.490876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 02:33:48.490905 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:33:48.490915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-06 02:33:48.490926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 02:33:48.490936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 02:33:48.490959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 02:34:00.895382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 02:34:00.895515 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:34:00.895534 | orchestrator | 2026-04-06 02:34:00.895547 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-06 02:34:00.895561 | orchestrator | Monday 06 April 2026 02:33:48 +0000 (0:00:00.793) 0:04:35.688 ********** 2026-04-06 02:34:00.895573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-06 02:34:00.895614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-06 02:34:00.895628 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:34:00.895640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-06 02:34:00.895651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-06 02:34:00.895662 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:34:00.895769 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-06 02:34:00.895784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-06 02:34:00.895795 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:34:00.895806 | orchestrator | 2026-04-06 02:34:00.895818 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-06 02:34:00.895829 | orchestrator | Monday 06 April 2026 02:33:49 +0000 (0:00:01.026) 0:04:36.715 ********** 2026-04-06 02:34:00.895840 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:34:00.895851 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:34:00.895862 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:34:00.895873 | orchestrator | 2026-04-06 02:34:00.895883 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-04-06 02:34:00.895895 | orchestrator | Monday 06 April 2026 02:33:51 +0000 (0:00:01.802) 0:04:38.517 ********** 2026-04-06 02:34:00.895905 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:34:00.895916 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:34:00.895928 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:34:00.895939 | orchestrator | 2026-04-06 02:34:00.895950 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-06 02:34:00.895961 | orchestrator | Monday 06 April 2026 02:33:53 +0000 (0:00:02.203) 0:04:40.721 ********** 2026-04-06 02:34:00.895972 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:34:00.895984 | orchestrator | 2026-04-06 02:34:00.895995 
| orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-06 02:34:00.896005 | orchestrator | Monday 06 April 2026 02:33:55 +0000 (0:00:01.542) 0:04:42.263 ********** 2026-04-06 02:34:00.896035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-06 02:34:00.896075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-06 02:34:00.896100 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-06 02:34:00.896114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-06 02:34:00.896134 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-06 02:34:00.896156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-06 02:34:02.938981 | orchestrator | 2026-04-06 02:34:02.939099 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-04-06 02:34:02.939121 | orchestrator | Monday 06 April 2026 02:34:00 +0000 (0:00:05.821) 0:04:48.084 ********** 2026-04-06 02:34:02.939140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-06 02:34:02.939161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-06 02:34:02.939177 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:34:02.939214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-06 02:34:02.939233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-06 02:34:02.939292 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:34:02.939304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-06 02:34:02.939315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-06 02:34:02.939325 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:34:02.939334 | orchestrator | 2026-04-06 02:34:02.939343 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-06 02:34:02.939354 | orchestrator | Monday 06 April 2026 02:34:01 +0000 (0:00:01.058) 0:04:49.142 ********** 2026-04-06 02:34:02.939370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-06 02:34:02.939390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-06 02:34:02.939413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-06 02:34:02.939439 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:34:02.939460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-06 02:34:02.939476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-06 02:34:02.939491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-06 02:34:02.939506 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:34:02.939521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-06 02:34:02.939536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-06 02:34:02.939568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-06 02:34:09.388548 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:34:09.388664 | orchestrator | 2026-04-06 02:34:09.388760 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-04-06 02:34:09.388775 | orchestrator | Monday 06 April 2026 02:34:02 +0000 (0:00:00.988) 0:04:50.131 ********** 2026-04-06 02:34:09.388787 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:34:09.388799 | orchestrator | 
skipping: [testbed-node-1] 2026-04-06 02:34:09.388811 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:34:09.388823 | orchestrator | 2026-04-06 02:34:09.388835 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-04-06 02:34:09.388847 | orchestrator | Monday 06 April 2026 02:34:03 +0000 (0:00:00.486) 0:04:50.617 ********** 2026-04-06 02:34:09.388858 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:34:09.388870 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:34:09.388881 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:34:09.388893 | orchestrator | 2026-04-06 02:34:09.388904 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-04-06 02:34:09.388916 | orchestrator | Monday 06 April 2026 02:34:04 +0000 (0:00:01.527) 0:04:52.145 ********** 2026-04-06 02:34:09.388928 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:34:09.388940 | orchestrator | 2026-04-06 02:34:09.388951 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-04-06 02:34:09.388963 | orchestrator | Monday 06 April 2026 02:34:06 +0000 (0:00:01.871) 0:04:54.016 ********** 2026-04-06 02:34:09.388978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-06 02:34:09.389021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 02:34:09.389052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:34:09.389068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:34:09.389084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 02:34:09.389122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-06 02:34:09.389139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-06 02:34:09.389153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 02:34:09.389178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 02:34:09.389198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:34:09.389215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:34:09.389228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:34:09.389251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 02:34:11.157307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-06 02:34:11.157396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 02:34:11.157424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-06 02:34:11.157447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-06 02:34:11.157454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:34:11.157461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:34:11.157547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-06 02:34:11.157554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-06 02:34:11.157571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-06 02:34:11.157577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-06 02:34:11.157588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-06 02:34:12.006188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:34:12.006324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:34:12.006343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:34:12.006356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 
'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:34:12.006385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-06 02:34:12.006397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-06 02:34:12.006409 | orchestrator | 2026-04-06 02:34:12.006422 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-04-06 02:34:12.006436 | orchestrator | Monday 06 April 2026 02:34:11 +0000 (0:00:04.473) 0:04:58.490 ********** 2026-04-06 02:34:12.006449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-06 02:34:12.006483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 02:34:12.006504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:34:12.006516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:34:12.006529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 02:34:12.006567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-06 02:34:12.006582 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-06 02:34:12.006603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:34:12.133252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:34:12.133343 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-06 02:34:12.133371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-06 02:34:12.133380 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:34:12.133388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 02:34:12.133395 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:34:12.133401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:34:12.133422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-06 02:34:12.133454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 02:34:12.133462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 02:34:12.133477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-06 02:34:12.133484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:34:12.133491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-06 02:34:12.133505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:34:12.133517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:34:13.832830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 02:34:13.832933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-06 02:34:13.832950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:34:13.832963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-06 02:34:13.832997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-06 02:34:13.833010 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:34:13.833024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:34:13.833054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 02:34:13.833064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}})  2026-04-06 02:34:13.833071 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:34:13.833078 | orchestrator | 2026-04-06 02:34:13.833086 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-04-06 02:34:13.833094 | orchestrator | Monday 06 April 2026 02:34:12 +0000 (0:00:00.999) 0:04:59.489 ********** 2026-04-06 02:34:13.833107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-06 02:34:13.833117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-06 02:34:13.833127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-06 02:34:13.833136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-06 02:34:13.833145 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:34:13.833152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-06 02:34:13.833165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-06 02:34:13.833172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-06 02:34:13.833179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-06 02:34:13.833186 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:34:13.833193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-06 02:34:13.833200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-06 02:34:13.833207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-06 02:34:13.833218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-06 02:34:21.912490 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:34:21.912619 | orchestrator | 2026-04-06 02:34:21.912637 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-06 02:34:21.912650 | orchestrator | Monday 06 April 2026 02:34:13 +0000 (0:00:01.521) 0:05:01.011 ********** 2026-04-06 02:34:21.912661 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:34:21.912740 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:34:21.912754 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:34:21.912764 | orchestrator | 2026-04-06 02:34:21.912776 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-04-06 02:34:21.912787 | orchestrator | Monday 06 April 2026 02:34:14 +0000 (0:00:00.507) 0:05:01.518 ********** 2026-04-06 02:34:21.912798 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:34:21.912809 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:34:21.912820 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:34:21.912831 | orchestrator | 2026-04-06 02:34:21.912842 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-06 02:34:21.912853 | orchestrator | Monday 06 April 2026 02:34:15 +0000 (0:00:01.458) 0:05:02.976 ********** 2026-04-06 02:34:21.912867 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:34:21.912885 | orchestrator | 2026-04-06 02:34:21.912909 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-06 02:34:21.912956 | orchestrator | Monday 06 April 2026 02:34:17 +0000 (0:00:01.938) 0:05:04.915 ********** 2026-04-06 02:34:21.912981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 02:34:21.913058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 02:34:21.913119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 02:34:21.913142 | orchestrator | 2026-04-06 02:34:21.913160 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-06 02:34:21.913206 | orchestrator | Monday 06 April 2026 02:34:19 +0000 (0:00:02.218) 0:05:07.134 ********** 2026-04-06 02:34:21.913235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-06 02:34:21.913268 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:34:21.913288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-06 02:34:21.913309 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:34:21.913329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-06 02:34:21.913349 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:34:21.913367 | orchestrator | 2026-04-06 02:34:21.913385 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-06 02:34:21.913404 | orchestrator | Monday 06 April 2026 02:34:20 +0000 (0:00:00.477) 0:05:07.611 ********** 2026-04-06 02:34:21.913425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-06 02:34:21.913445 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:34:21.913473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-06 02:34:21.913493 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:34:21.913510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-06 02:34:21.913528 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:34:21.913544 | orchestrator | 2026-04-06 02:34:21.913555 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-06 02:34:21.913566 | orchestrator | Monday 06 April 2026 02:34:21 +0000 (0:00:01.022) 0:05:08.634 ********** 2026-04-06 02:34:21.913588 | orchestrator | skipping: [testbed-node-0] 
2026-04-06 02:34:32.598450 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:34:32.598574 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:34:32.598589 | orchestrator | 2026-04-06 02:34:32.598602 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-06 02:34:32.598616 | orchestrator | Monday 06 April 2026 02:34:21 +0000 (0:00:00.481) 0:05:09.116 ********** 2026-04-06 02:34:32.598628 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:34:32.598716 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:34:32.598730 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:34:32.598742 | orchestrator | 2026-04-06 02:34:32.598753 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-04-06 02:34:32.598765 | orchestrator | Monday 06 April 2026 02:34:23 +0000 (0:00:01.505) 0:05:10.621 ********** 2026-04-06 02:34:32.598777 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:34:32.598789 | orchestrator | 2026-04-06 02:34:32.598801 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-06 02:34:32.598813 | orchestrator | Monday 06 April 2026 02:34:25 +0000 (0:00:01.672) 0:05:12.293 ********** 2026-04-06 02:34:32.598842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-06 02:34:32.598858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-06 02:34:32.598886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-06 02:34:32.598929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-06 02:34:32.598958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-06 02:34:32.598984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-06 02:34:32.599006 | orchestrator | 2026-04-06 02:34:32.599019 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-06 02:34:32.599046 | orchestrator | Monday 06 April 2026 02:34:31 +0000 (0:00:06.817) 0:05:19.111 ********** 2026-04-06 02:34:32.599059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-06 02:34:32.599090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-06 02:34:38.781540 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:34:38.781796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-06 02:34:38.781834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-06 02:34:38.781858 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:34:38.781880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-06 02:34:38.781902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-06 02:34:38.781947 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:34:38.781979 | orchestrator | 2026-04-06 02:34:38.781994 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-06 02:34:38.782074 | orchestrator | Monday 06 April 2026 02:34:32 +0000 (0:00:00.683) 0:05:19.794 ********** 2026-04-06 02:34:38.782113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-06 02:34:38.782165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-06 02:34:38.782181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-06 02:34:38.782205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-06 02:34:38.782218 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:34:38.782230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-06 02:34:38.782242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-06 02:34:38.782253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-06 02:34:38.782264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-06 
02:34:38.782276 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:34:38.782287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-06 02:34:38.782298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-06 02:34:38.782310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-06 02:34:38.782321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-06 02:34:38.782333 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:34:38.782354 | orchestrator | 2026-04-06 02:34:38.782365 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-04-06 02:34:38.782377 | orchestrator | Monday 06 April 2026 02:34:33 +0000 (0:00:01.032) 0:05:20.826 ********** 2026-04-06 02:34:38.782388 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:34:38.782399 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:34:38.782410 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:34:38.782421 | orchestrator | 2026-04-06 02:34:38.782432 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-04-06 02:34:38.782443 | orchestrator | Monday 06 April 2026 02:34:35 +0000 (0:00:01.406) 0:05:22.232 ********** 2026-04-06 02:34:38.782455 | orchestrator | 
changed: [testbed-node-0] 2026-04-06 02:34:38.782466 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:34:38.782477 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:34:38.782488 | orchestrator | 2026-04-06 02:34:38.782500 | orchestrator | TASK [include_role : swift] **************************************************** 2026-04-06 02:34:38.782511 | orchestrator | Monday 06 April 2026 02:34:37 +0000 (0:00:02.341) 0:05:24.574 ********** 2026-04-06 02:34:38.782522 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:34:38.782533 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:34:38.782544 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:34:38.782555 | orchestrator | 2026-04-06 02:34:38.782566 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-04-06 02:34:38.782577 | orchestrator | Monday 06 April 2026 02:34:38 +0000 (0:00:00.684) 0:05:25.258 ********** 2026-04-06 02:34:38.782588 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:34:38.782601 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:34:38.782621 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:34:38.782646 | orchestrator | 2026-04-06 02:34:38.782702 | orchestrator | TASK [include_role : trove] **************************************************** 2026-04-06 02:34:38.782721 | orchestrator | Monday 06 April 2026 02:34:38 +0000 (0:00:00.348) 0:05:25.607 ********** 2026-04-06 02:34:38.782740 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:34:38.782770 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:35:24.744367 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:35:24.744463 | orchestrator | 2026-04-06 02:35:24.744474 | orchestrator | TASK [include_role : venus] **************************************************** 2026-04-06 02:35:24.744482 | orchestrator | Monday 06 April 2026 02:34:38 +0000 (0:00:00.376) 0:05:25.984 ********** 2026-04-06 02:35:24.744488 | orchestrator | 
skipping: [testbed-node-0] 2026-04-06 02:35:24.744495 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:35:24.744501 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:35:24.744508 | orchestrator | 2026-04-06 02:35:24.744514 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-04-06 02:35:24.744520 | orchestrator | Monday 06 April 2026 02:34:39 +0000 (0:00:00.345) 0:05:26.330 ********** 2026-04-06 02:35:24.744527 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:35:24.744534 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:35:24.744540 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:35:24.744546 | orchestrator | 2026-04-06 02:35:24.744552 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-04-06 02:35:24.744572 | orchestrator | Monday 06 April 2026 02:34:39 +0000 (0:00:00.691) 0:05:27.022 ********** 2026-04-06 02:35:24.744579 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:35:24.744586 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:35:24.744592 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:35:24.744599 | orchestrator | 2026-04-06 02:35:24.744605 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-06 02:35:24.744611 | orchestrator | Monday 06 April 2026 02:34:40 +0000 (0:00:00.636) 0:05:27.658 ********** 2026-04-06 02:35:24.744617 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:35:24.744624 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:35:24.744630 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:35:24.744652 | orchestrator | 2026-04-06 02:35:24.744690 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-06 02:35:24.744724 | orchestrator | Monday 06 April 2026 02:34:41 +0000 (0:00:00.668) 0:05:28.326 ********** 2026-04-06 02:35:24.744731 | orchestrator | ok: [testbed-node-0] 
2026-04-06 02:35:24.744737 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:35:24.744744 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:35:24.744750 | orchestrator | 2026-04-06 02:35:24.744756 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-06 02:35:24.744762 | orchestrator | Monday 06 April 2026 02:34:41 +0000 (0:00:00.740) 0:05:29.067 ********** 2026-04-06 02:35:24.744768 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:35:24.744775 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:35:24.744781 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:35:24.744787 | orchestrator | 2026-04-06 02:35:24.744793 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-06 02:35:24.744799 | orchestrator | Monday 06 April 2026 02:34:42 +0000 (0:00:00.968) 0:05:30.036 ********** 2026-04-06 02:35:24.744805 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:35:24.744812 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:35:24.744818 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:35:24.744824 | orchestrator | 2026-04-06 02:35:24.744830 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-06 02:35:24.744836 | orchestrator | Monday 06 April 2026 02:34:43 +0000 (0:00:00.857) 0:05:30.893 ********** 2026-04-06 02:35:24.744843 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:35:24.744849 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:35:24.744855 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:35:24.744861 | orchestrator | 2026-04-06 02:35:24.744867 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-06 02:35:24.744874 | orchestrator | Monday 06 April 2026 02:34:44 +0000 (0:00:00.905) 0:05:31.798 ********** 2026-04-06 02:35:24.744880 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:35:24.744886 | orchestrator | changed: [testbed-node-1] 
2026-04-06 02:35:24.744892 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:35:24.744899 | orchestrator |
2026-04-06 02:35:24.744905 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-04-06 02:35:24.744911 | orchestrator | Monday 06 April 2026 02:34:54 +0000 (0:00:09.805) 0:05:41.604 **********
2026-04-06 02:35:24.744917 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:35:24.744924 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:35:24.744930 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:35:24.744936 | orchestrator |
2026-04-06 02:35:24.744942 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-04-06 02:35:24.744948 | orchestrator | Monday 06 April 2026 02:34:55 +0000 (0:00:01.202) 0:05:42.806 **********
2026-04-06 02:35:24.744955 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:35:24.744961 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:35:24.744967 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:35:24.744974 | orchestrator |
2026-04-06 02:35:24.744980 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-04-06 02:35:24.744986 | orchestrator | Monday 06 April 2026 02:35:05 +0000 (0:00:10.239) 0:05:53.046 **********
2026-04-06 02:35:24.744993 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:35:24.744999 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:35:24.745005 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:35:24.745012 | orchestrator |
2026-04-06 02:35:24.745018 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-04-06 02:35:24.745024 | orchestrator | Monday 06 April 2026 02:35:10 +0000 (0:00:04.701) 0:05:57.748 **********
2026-04-06 02:35:24.745030 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:35:24.745037 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:35:24.745043 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:35:24.745049 | orchestrator |
2026-04-06 02:35:24.745055 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-04-06 02:35:24.745062 | orchestrator | Monday 06 April 2026 02:35:15 +0000 (0:00:04.571) 0:06:02.319 **********
2026-04-06 02:35:24.745076 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:35:24.745083 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:35:24.745089 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:35:24.745095 | orchestrator |
2026-04-06 02:35:24.745101 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-04-06 02:35:24.745120 | orchestrator | Monday 06 April 2026 02:35:15 +0000 (0:00:00.784) 0:06:03.104 **********
2026-04-06 02:35:24.745126 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:35:24.745132 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:35:24.745139 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:35:24.745152 | orchestrator |
2026-04-06 02:35:24.745172 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-04-06 02:35:24.745179 | orchestrator | Monday 06 April 2026 02:35:16 +0000 (0:00:00.426) 0:06:03.531 **********
2026-04-06 02:35:24.745185 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:35:24.745192 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:35:24.745198 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:35:24.745204 | orchestrator |
2026-04-06 02:35:24.745212 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-04-06 02:35:24.745223 | orchestrator | Monday 06 April 2026 02:35:16 +0000 (0:00:00.361) 0:06:03.892 **********
2026-04-06 02:35:24.745233 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:35:24.745243 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:35:24.745253 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:35:24.745264 | orchestrator |
2026-04-06 02:35:24.745274 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-04-06 02:35:24.745285 | orchestrator | Monday 06 April 2026 02:35:17 +0000 (0:00:00.407) 0:06:04.299 **********
2026-04-06 02:35:24.745295 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:35:24.745312 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:35:24.745320 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:35:24.745326 | orchestrator |
2026-04-06 02:35:24.745333 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-04-06 02:35:24.745339 | orchestrator | Monday 06 April 2026 02:35:17 +0000 (0:00:00.745) 0:06:05.045 **********
2026-04-06 02:35:24.745345 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:35:24.745351 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:35:24.745357 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:35:24.745363 | orchestrator |
2026-04-06 02:35:24.745370 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-04-06 02:35:24.745376 | orchestrator | Monday 06 April 2026 02:35:18 +0000 (0:00:00.371) 0:06:05.416 **********
2026-04-06 02:35:24.745382 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:35:24.745388 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:35:24.745394 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:35:24.745400 | orchestrator |
2026-04-06 02:35:24.745407 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-04-06 02:35:24.745413 | orchestrator | Monday 06 April 2026 02:35:22 +0000 (0:00:04.765) 0:06:10.182 **********
2026-04-06 02:35:24.745419 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:35:24.745425 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:35:24.745431 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:35:24.745437 | orchestrator |
2026-04-06 02:35:24.745444 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 02:35:24.745452 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-04-06 02:35:24.745459 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-04-06 02:35:24.745466 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-04-06 02:35:24.745472 | orchestrator |
2026-04-06 02:35:24.745484 | orchestrator |
2026-04-06 02:35:24.745490 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 02:35:24.745497 | orchestrator | Monday 06 April 2026 02:35:23 +0000 (0:00:00.844) 0:06:11.026 **********
2026-04-06 02:35:24.745503 | orchestrator | ===============================================================================
2026-04-06 02:35:24.745509 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 10.24s
2026-04-06 02:35:24.745515 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.81s
2026-04-06 02:35:24.745521 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.82s
2026-04-06 02:35:24.745528 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.82s
2026-04-06 02:35:24.745534 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.77s
2026-04-06 02:35:24.745540 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.70s
2026-04-06 02:35:24.745546 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.58s
2026-04-06 02:35:24.745552 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.57s
2026-04-06 02:35:24.745558 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.47s
2026-04-06 02:35:24.745564 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.37s
2026-04-06 02:35:24.745570 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.32s
2026-04-06 02:35:24.745576 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.29s
2026-04-06 02:35:24.745583 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.10s
2026-04-06 02:35:24.745589 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.03s
2026-04-06 02:35:24.745595 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.00s
2026-04-06 02:35:24.745601 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.75s
2026-04-06 02:35:24.745607 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.73s
2026-04-06 02:35:24.745623 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.66s
2026-04-06 02:35:24.745630 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.63s
2026-04-06 02:35:24.745636 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.56s
2026-04-06 02:35:27.282940 | orchestrator | 2026-04-06 02:35:27 | INFO  | Task a3dfe7c9-ad9d-4436-a8d0-acf02aa7cbba (opensearch) was prepared for execution.
2026-04-06 02:35:27.283039 | orchestrator | 2026-04-06 02:35:27 | INFO  | It takes a moment until task a3dfe7c9-ad9d-4436-a8d0-acf02aa7cbba (opensearch) has been started and output is visible here.
2026-04-06 02:35:38.635506 | orchestrator |
2026-04-06 02:35:38.635604 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 02:35:38.635615 | orchestrator |
2026-04-06 02:35:38.635622 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 02:35:38.635629 | orchestrator | Monday 06 April 2026 02:35:31 +0000 (0:00:00.263) 0:00:00.263 **********
2026-04-06 02:35:38.635635 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:35:38.635642 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:35:38.635648 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:35:38.635672 | orchestrator |
2026-04-06 02:35:38.635680 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 02:35:38.635700 | orchestrator | Monday 06 April 2026 02:35:32 +0000 (0:00:00.315) 0:00:00.578 **********
2026-04-06 02:35:38.635708 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-04-06 02:35:38.635715 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-04-06 02:35:38.635721 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-04-06 02:35:38.635726 | orchestrator |
2026-04-06 02:35:38.635732 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-04-06 02:35:38.635756 | orchestrator |
2026-04-06 02:35:38.635763 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-06 02:35:38.635769 | orchestrator | Monday 06 April 2026 02:35:32 +0000 (0:00:00.464) 0:00:01.043 **********
2026-04-06 02:35:38.635776 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:35:38.635782 | orchestrator |
2026-04-06 02:35:38.635789 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-04-06 02:35:38.635795 | orchestrator | Monday 06 April 2026 02:35:33 +0000 (0:00:00.535) 0:00:01.578 **********
2026-04-06 02:35:38.635800 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-06 02:35:38.635806 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-06 02:35:38.635813 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-06 02:35:38.635818 | orchestrator |
2026-04-06 02:35:38.635824 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-04-06 02:35:38.635829 | orchestrator | Monday 06 April 2026 02:35:33 +0000 (0:00:00.674) 0:00:02.253 **********
2026-04-06 02:35:38.635839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-06 02:35:38.635849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-06 02:35:38.635873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-06 02:35:38.635886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-06 02:35:38.635898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-06 02:35:38.635905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-06 02:35:38.635912 | orchestrator |
2026-04-06 02:35:38.635918 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-06 02:35:38.635924 | orchestrator | Monday 06 April 2026 02:35:35 +0000 (0:00:01.771) 0:00:04.024 **********
2026-04-06 02:35:38.635930 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:35:38.635936 | orchestrator |
2026-04-06 02:35:38.635942 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-04-06 02:35:38.635948 | orchestrator | Monday 06 April 2026 02:35:36 +0000 (0:00:00.610) 0:00:04.635 **********
2026-04-06 02:35:38.635963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-06 02:35:39.485927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-06 02:35:39.486118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-06 02:35:39.486154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-06 02:35:39.486170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-06 02:35:39.486243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-06 02:35:39.486260 | orchestrator |
2026-04-06 02:35:39.486272 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-04-06 02:35:39.486285 | orchestrator | Monday 06 April 2026 02:35:38 +0000 (0:00:02.487) 0:00:07.123 **********
2026-04-06 02:35:39.486298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-06 02:35:39.486310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-06 02:35:39.486323 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:35:39.486336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-06 02:35:39.486369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-06 02:35:40.621353 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:35:40.621483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-06 02:35:40.621516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-06 02:35:40.621532 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:35:40.621544 | orchestrator |
2026-04-06 02:35:40.621557 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-04-06 02:35:40.621570 | orchestrator | Monday 06 April 2026 02:35:39 +0000 (0:00:00.851) 0:00:07.975 **********
2026-04-06 02:35:40.621607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-06 02:35:40.621634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-06 02:35:40.621702 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:35:40.621718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-06 02:35:40.621731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-06 02:35:40.621743 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:35:40.621766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-06 02:35:40.621810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-06 02:35:40.621841 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:35:40.621861 | orchestrator |
2026-04-06 02:35:40.621880 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2026-04-06 02:35:40.621907 | orchestrator | Monday 06 April 2026 02:35:40 +0000 (0:00:01.126) 0:00:09.101 **********
2026-04-06 02:35:48.813508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-06 02:35:48.813624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-06 02:35:48.813642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-06 02:35:48.813800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-06 02:35:48.813841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-06 02:35:48.813856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-06 02:35:48.813880 | orchestrator | 2026-04-06 02:35:48.813893 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-06 02:35:48.813906 | orchestrator | Monday 06 April 2026 02:35:42 +0000 (0:00:02.324) 0:00:11.425 ********** 2026-04-06 02:35:48.813918 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:35:48.813931 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:35:48.813942 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:35:48.813953 | orchestrator | 2026-04-06 02:35:48.813964 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-06 02:35:48.813975 | orchestrator | Monday 06 April 2026 02:35:45 +0000 (0:00:02.345) 0:00:13.770 ********** 2026-04-06 02:35:48.813986 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:35:48.813997 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:35:48.814008 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:35:48.814081 | orchestrator | 2026-04-06 02:35:48.814096 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-04-06 
02:35:48.814108 | orchestrator | Monday 06 April 2026 02:35:47 +0000 (0:00:01.820) 0:00:15.591 ********** 2026-04-06 02:35:48.814123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-06 02:35:48.814143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-06 02:35:48.814167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-06 02:38:35.069200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-06 02:38:35.069354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-06 02:38:35.069396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-06 02:38:35.069413 | orchestrator | 2026-04-06 02:38:35.069427 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-06 02:38:35.069442 | orchestrator | Monday 06 April 2026 02:35:48 +0000 (0:00:01.709) 0:00:17.301 ********** 2026-04-06 02:38:35.069454 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:38:35.069468 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:38:35.069480 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:38:35.069493 | orchestrator | 2026-04-06 02:38:35.069506 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-06 02:38:35.069519 | orchestrator | Monday 06 April 2026 02:35:49 +0000 (0:00:00.312) 0:00:17.613 ********** 2026-04-06 02:38:35.069531 | orchestrator | 2026-04-06 02:38:35.069544 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-06 02:38:35.069557 | orchestrator | Monday 06 April 2026 02:35:49 +0000 (0:00:00.064) 0:00:17.678 ********** 2026-04-06 02:38:35.069568 | orchestrator | 2026-04-06 02:38:35.069575 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-06 02:38:35.069593 | orchestrator | Monday 06 April 2026 02:35:49 +0000 (0:00:00.070) 0:00:17.749 ********** 2026-04-06 02:38:35.069601 | orchestrator | 2026-04-06 02:38:35.069608 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-06 02:38:35.069684 | orchestrator | Monday 06 April 2026 02:35:49 +0000 (0:00:00.069) 0:00:17.818 ********** 2026-04-06 02:38:35.069695 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:38:35.069702 | orchestrator | 2026-04-06 02:38:35.069710 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-06 02:38:35.069717 | 
orchestrator | Monday 06 April 2026 02:35:49 +0000 (0:00:00.235) 0:00:18.053 ********** 2026-04-06 02:38:35.069727 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:38:35.069735 | orchestrator | 2026-04-06 02:38:35.069744 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-06 02:38:35.069752 | orchestrator | Monday 06 April 2026 02:35:50 +0000 (0:00:00.697) 0:00:18.751 ********** 2026-04-06 02:38:35.069761 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:38:35.069769 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:38:35.069778 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:38:35.069786 | orchestrator | 2026-04-06 02:38:35.069795 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-04-06 02:38:35.069804 | orchestrator | Monday 06 April 2026 02:36:58 +0000 (0:01:08.565) 0:01:27.316 ********** 2026-04-06 02:38:35.069813 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:38:35.069821 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:38:35.069828 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:38:35.069835 | orchestrator | 2026-04-06 02:38:35.069844 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-06 02:38:35.069857 | orchestrator | Monday 06 April 2026 02:38:24 +0000 (0:01:25.304) 0:02:52.621 ********** 2026-04-06 02:38:35.069879 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:38:35.069890 | orchestrator | 2026-04-06 02:38:35.069902 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-04-06 02:38:35.069913 | orchestrator | Monday 06 April 2026 02:38:24 +0000 (0:00:00.554) 0:02:53.175 ********** 2026-04-06 02:38:35.069924 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:38:35.069934 | orchestrator | 2026-04-06 
02:38:35.069945 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-04-06 02:38:35.069957 | orchestrator | Monday 06 April 2026 02:38:27 +0000 (0:00:02.816) 0:02:55.992 ********** 2026-04-06 02:38:35.069967 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:38:35.069978 | orchestrator | 2026-04-06 02:38:35.069990 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-04-06 02:38:35.070002 | orchestrator | Monday 06 April 2026 02:38:29 +0000 (0:00:02.184) 0:02:58.177 ********** 2026-04-06 02:38:35.070067 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:38:35.070078 | orchestrator | 2026-04-06 02:38:35.070086 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-04-06 02:38:35.070093 | orchestrator | Monday 06 April 2026 02:38:32 +0000 (0:00:02.829) 0:03:01.007 ********** 2026-04-06 02:38:35.070100 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:38:35.070108 | orchestrator | 2026-04-06 02:38:35.070115 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 02:38:35.070124 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-06 02:38:35.070133 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-06 02:38:35.070149 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-06 02:38:35.070156 | orchestrator | 2026-04-06 02:38:35.070164 | orchestrator | 2026-04-06 02:38:35.070178 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 02:38:35.070185 | orchestrator | Monday 06 April 2026 02:38:35 +0000 (0:00:02.529) 0:03:03.536 ********** 2026-04-06 02:38:35.070192 | orchestrator | 
=============================================================================== 2026-04-06 02:38:35.070200 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 85.30s 2026-04-06 02:38:35.070207 | orchestrator | opensearch : Restart opensearch container ------------------------------ 68.57s 2026-04-06 02:38:35.070214 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.83s 2026-04-06 02:38:35.070221 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.82s 2026-04-06 02:38:35.070229 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.53s 2026-04-06 02:38:35.070236 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.49s 2026-04-06 02:38:35.070243 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.35s 2026-04-06 02:38:35.070250 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.32s 2026-04-06 02:38:35.070257 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.18s 2026-04-06 02:38:35.070265 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.82s 2026-04-06 02:38:35.070272 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.77s 2026-04-06 02:38:35.070279 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.71s 2026-04-06 02:38:35.070286 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.13s 2026-04-06 02:38:35.070293 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.85s 2026-04-06 02:38:35.070307 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.70s 2026-04-06 02:38:35.070325 | orchestrator | 
opensearch : Setting sysctl values -------------------------------------- 0.67s 2026-04-06 02:38:35.070351 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.61s 2026-04-06 02:38:35.480118 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2026-04-06 02:38:35.480206 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2026-04-06 02:38:35.480217 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2026-04-06 02:38:38.180269 | orchestrator | 2026-04-06 02:38:38 | INFO  | Task 35b5839e-754b-46bb-bd8e-cf2034767f59 (memcached) was prepared for execution. 2026-04-06 02:38:38.182006 | orchestrator | 2026-04-06 02:38:38 | INFO  | It takes a moment until task 35b5839e-754b-46bb-bd8e-cf2034767f59 (memcached) has been started and output is visible here. 2026-04-06 02:38:50.998146 | orchestrator | 2026-04-06 02:38:50.998260 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 02:38:50.998274 | orchestrator | 2026-04-06 02:38:50.998285 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 02:38:50.998295 | orchestrator | Monday 06 April 2026 02:38:42 +0000 (0:00:00.303) 0:00:00.303 ********** 2026-04-06 02:38:50.998305 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:38:50.998314 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:38:50.998323 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:38:50.998332 | orchestrator | 2026-04-06 02:38:50.998341 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 02:38:50.998351 | orchestrator | Monday 06 April 2026 02:38:43 +0000 (0:00:00.361) 0:00:00.664 ********** 2026-04-06 02:38:50.998360 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-06 02:38:50.998370 | 
orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-06 02:38:50.998378 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-06 02:38:50.998387 | orchestrator | 2026-04-06 02:38:50.998396 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-06 02:38:50.998428 | orchestrator | 2026-04-06 02:38:50.998437 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-04-06 02:38:50.998446 | orchestrator | Monday 06 April 2026 02:38:43 +0000 (0:00:00.470) 0:00:01.134 ********** 2026-04-06 02:38:50.998456 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:38:50.998465 | orchestrator | 2026-04-06 02:38:50.998474 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-04-06 02:38:50.998482 | orchestrator | Monday 06 April 2026 02:38:44 +0000 (0:00:00.517) 0:00:01.651 ********** 2026-04-06 02:38:50.998491 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-06 02:38:50.998500 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-06 02:38:50.998509 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-06 02:38:50.998518 | orchestrator | 2026-04-06 02:38:50.998527 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-06 02:38:50.998535 | orchestrator | Monday 06 April 2026 02:38:44 +0000 (0:00:00.702) 0:00:02.354 ********** 2026-04-06 02:38:50.998544 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-04-06 02:38:50.998553 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-04-06 02:38:50.998561 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-04-06 02:38:50.998570 | orchestrator | 2026-04-06 02:38:50.998579 | orchestrator | TASK [memcached : Check 
memcached container] *********************************** 2026-04-06 02:38:50.998588 | orchestrator | Monday 06 April 2026 02:38:46 +0000 (0:00:01.798) 0:00:04.152 ********** 2026-04-06 02:38:50.998609 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:38:50.998676 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:38:50.998689 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:38:50.998702 | orchestrator | 2026-04-06 02:38:50.998716 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-06 02:38:50.998732 | orchestrator | Monday 06 April 2026 02:38:48 +0000 (0:00:01.602) 0:00:05.755 ********** 2026-04-06 02:38:50.998747 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:38:50.998764 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:38:50.998779 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:38:50.998792 | orchestrator | 2026-04-06 02:38:50.998802 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 02:38:50.998812 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 02:38:50.998824 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 02:38:50.998835 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 02:38:50.998845 | orchestrator | 2026-04-06 02:38:50.998855 | orchestrator | 2026-04-06 02:38:50.998865 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 02:38:50.998876 | orchestrator | Monday 06 April 2026 02:38:50 +0000 (0:00:02.156) 0:00:07.912 ********** 2026-04-06 02:38:50.998886 | orchestrator | =============================================================================== 2026-04-06 02:38:50.998896 | orchestrator | memcached : Restart memcached container 
--------------------------------- 2.16s 2026-04-06 02:38:50.998907 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.80s 2026-04-06 02:38:50.998917 | orchestrator | memcached : Check memcached container ----------------------------------- 1.60s 2026-04-06 02:38:50.998926 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.70s 2026-04-06 02:38:50.998935 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.52s 2026-04-06 02:38:50.998944 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s 2026-04-06 02:38:50.998963 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2026-04-06 02:38:53.615182 | orchestrator | 2026-04-06 02:38:53 | INFO  | Task 5761d59a-543c-4247-865d-7ff0d003cbf6 (redis) was prepared for execution. 2026-04-06 02:38:53.615258 | orchestrator | 2026-04-06 02:38:53 | INFO  | It takes a moment until task 5761d59a-543c-4247-865d-7ff0d003cbf6 (redis) has been started and output is visible here. 
2026-04-06 02:39:03.197492 | orchestrator |
2026-04-06 02:39:03.197596 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 02:39:03.197610 | orchestrator |
2026-04-06 02:39:03.197705 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 02:39:03.197725 | orchestrator | Monday 06 April 2026 02:38:58 +0000 (0:00:00.284) 0:00:00.284 **********
2026-04-06 02:39:03.197738 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:39:03.197753 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:39:03.197766 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:39:03.197782 | orchestrator |
2026-04-06 02:39:03.197797 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 02:39:03.197811 | orchestrator | Monday 06 April 2026 02:38:58 +0000 (0:00:00.326) 0:00:00.610 **********
2026-04-06 02:39:03.197823 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-04-06 02:39:03.197837 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-04-06 02:39:03.197851 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-04-06 02:39:03.197864 | orchestrator |
2026-04-06 02:39:03.197878 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-04-06 02:39:03.197892 | orchestrator |
2026-04-06 02:39:03.197902 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-04-06 02:39:03.197911 | orchestrator | Monday 06 April 2026 02:38:59 +0000 (0:00:00.532) 0:00:01.143 **********
2026-04-06 02:39:03.197919 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:39:03.197928 | orchestrator |
2026-04-06 02:39:03.197936 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-04-06 02:39:03.197944 | orchestrator | Monday 06 April 2026 02:38:59 +0000 (0:00:00.528) 0:00:01.672 **********
2026-04-06 02:39:03.197956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-06 02:39:03.197970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-06 02:39:03.197980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-06 02:39:03.198012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-06 02:39:03.198093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-06 02:39:03.198105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-06 02:39:03.198115 | orchestrator |
2026-04-06 02:39:03.198124 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-04-06 02:39:03.198135 | orchestrator | Monday 06 April 2026 02:39:00 +0000 (0:00:01.064) 0:00:02.736 **********
2026-04-06 02:39:03.198145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-06 02:39:03.198244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-06 02:39:03.198265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-06 02:39:03.198285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-06 02:39:03.198302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-06 02:39:07.058938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-06 02:39:07.059087 | orchestrator |
2026-04-06 02:39:07.059116 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-04-06 02:39:07.059133 | orchestrator | Monday 06 April 2026 02:39:03 +0000 (0:00:02.490) 0:00:05.227 **********
2026-04-06 02:39:07.060053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-06 02:39:07.060161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-06 02:39:07.060189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-06 02:39:07.060237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-06 02:39:07.060258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-06 02:39:07.060308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-06 02:39:07.060329 | orchestrator |
2026-04-06 02:39:07.060349 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-04-06 02:39:07.060369 | orchestrator | Monday 06 April 2026 02:39:05 +0000 (0:00:02.273) 0:00:07.501 **********
2026-04-06 02:39:07.060386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-06 02:39:07.060405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-06 02:39:07.060431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-06 02:39:07.060461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-06 02:39:07.060480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-06 02:39:07.060514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-06 02:39:18.761907 | orchestrator |
2026-04-06 02:39:18.762010 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-06 02:39:18.762055 | orchestrator | Monday 06 April 2026 02:39:06 +0000 (0:00:01.356) 0:00:08.857 **********
2026-04-06 02:39:18.762062 | orchestrator |
2026-04-06 02:39:18.762068 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-06 02:39:18.762074 | orchestrator | Monday 06 April 2026 02:39:06 +0000 (0:00:00.077) 0:00:08.935 **********
2026-04-06 02:39:18.762081 | orchestrator |
2026-04-06 02:39:18.762107 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-06 02:39:18.762114 | orchestrator | Monday 06 April 2026 02:39:06 +0000 (0:00:00.066) 0:00:09.001 **********
2026-04-06 02:39:18.762120 | orchestrator |
2026-04-06 02:39:18.762126 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-04-06 02:39:18.762133 | orchestrator | Monday 06 April 2026 02:39:07 +0000 (0:00:00.085) 0:00:09.087 **********
2026-04-06 02:39:18.762140 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:39:18.762147 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:39:18.762153 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:39:18.762159 | orchestrator |
2026-04-06 02:39:18.762165 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-04-06 02:39:18.762171 | orchestrator | Monday 06 April 2026 02:39:09 +0000 (0:00:02.951) 0:00:12.038 **********
2026-04-06 02:39:18.762201 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:39:18.762208 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:39:18.762213 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:39:18.762219 | orchestrator |
2026-04-06 02:39:18.762226 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 02:39:18.762232 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:39:18.762240 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:39:18.762259 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 02:39:18.762265 | orchestrator |
2026-04-06 02:39:18.762271 | orchestrator |
2026-04-06 02:39:18.762277 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 02:39:18.762283 | orchestrator | Monday 06 April 2026 02:39:18 +0000 (0:00:08.348) 0:00:20.387 **********
2026-04-06 02:39:18.762289 | orchestrator | ===============================================================================
2026-04-06 02:39:18.762294 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.35s
2026-04-06 02:39:18.762300 | orchestrator | redis : Restart redis container ----------------------------------------- 2.95s
2026-04-06 02:39:18.762306 | orchestrator | redis : Copying over default config.json files -------------------------- 2.49s
2026-04-06 02:39:18.762312 | orchestrator | redis : Copying over redis config files --------------------------------- 2.27s
2026-04-06 02:39:18.762317 | orchestrator | redis : Check redis containers ------------------------------------------ 1.36s
2026-04-06 02:39:18.762323 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.06s
2026-04-06 02:39:18.762329 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s
2026-04-06 02:39:18.762335 | orchestrator | redis : include_tasks --------------------------------------------------- 0.53s
2026-04-06 02:39:18.762342 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2026-04-06 02:39:18.762347 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.23s
2026-04-06 02:39:21.307937 | orchestrator | 2026-04-06 02:39:21 | INFO  | Task 3a9c0a5a-4d70-46e2-9eca-0e10ca375298 (mariadb) was prepared for execution.
2026-04-06 02:39:21.308044 | orchestrator | 2026-04-06 02:39:21 | INFO  | It takes a moment until task 3a9c0a5a-4d70-46e2-9eca-0e10ca375298 (mariadb) has been started and output is visible here.
2026-04-06 02:39:35.810338 | orchestrator |
2026-04-06 02:39:35.810452 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 02:39:35.810470 | orchestrator |
2026-04-06 02:39:35.810482 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 02:39:35.810494 | orchestrator | Monday 06 April 2026 02:39:25 +0000 (0:00:00.192) 0:00:00.192 **********
2026-04-06 02:39:35.810505 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:39:35.810517 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:39:35.810528 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:39:35.810538 | orchestrator |
2026-04-06 02:39:35.810548 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 02:39:35.810560 | orchestrator | Monday 06 April 2026 02:39:26 +0000 (0:00:00.319) 0:00:00.511 **********
2026-04-06 02:39:35.810570 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-06 02:39:35.810581 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-04-06 02:39:35.810592 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-04-06 02:39:35.810602 | orchestrator |
2026-04-06 02:39:35.810661 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-04-06 02:39:35.810670 | orchestrator |
2026-04-06 02:39:35.810676 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-04-06 02:39:35.810702 | orchestrator | Monday 06 April 2026 02:39:26 +0000 (0:00:00.598) 0:00:01.110 **********
2026-04-06 02:39:35.810709 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 02:39:35.810715 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-06 02:39:35.810722 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-06 02:39:35.810728 | orchestrator |
2026-04-06 02:39:35.810735 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-06 02:39:35.810742 | orchestrator | Monday 06 April 2026 02:39:27 +0000 (0:00:00.392) 0:00:01.503 **********
2026-04-06 02:39:35.810749 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:39:35.810756 | orchestrator |
2026-04-06 02:39:35.810762 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-04-06 02:39:35.810769 | orchestrator | Monday 06 April 2026 02:39:27 +0000 (0:00:00.529) 0:00:02.032 **********
2026-04-06 02:39:35.810794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-06 02:39:35.810825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-06 02:39:35.810844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-06 02:39:35.810852 | orchestrator |
2026-04-06 02:39:35.810858 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-04-06 02:39:35.810865 | orchestrator | Monday 06 April 2026 02:39:30 +0000 (0:00:02.751) 0:00:04.784 **********
2026-04-06 02:39:35.810871 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:39:35.810879 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:39:35.810885 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:39:35.810892 | orchestrator |
2026-04-06 02:39:35.810898 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-04-06 02:39:35.810905 | orchestrator | Monday 06 April 2026 02:39:31 +0000 (0:00:00.698) 0:00:05.482 **********
2026-04-06 02:39:35.810911 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:39:35.810917 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:39:35.810924 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:39:35.810930 | orchestrator |
2026-04-06 02:39:35.810937 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-04-06 02:39:35.810943 | orchestrator | Monday 06 April 2026 02:39:32 +0000 (0:00:01.508) 0:00:06.991 **********
2026-04-06 02:39:35.810956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-06 02:39:43.986522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-06 02:39:43.986701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-06 02:39:43.986762 | orchestrator |
2026-04-06 02:39:43.986785 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-04-06 02:39:43.986806 | orchestrator | Monday 06 April 2026 02:39:35 +0000 (0:00:03.133) 0:00:10.124 **********
2026-04-06 02:39:43.986825 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:39:43.986844 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:39:43.986861 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:39:43.986880 | orchestrator |
2026-04-06 02:39:43.986900 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-04-06 02:39:43.986945 | orchestrator | Monday 06 April 2026 02:39:36 +0000 (0:00:01.130) 0:00:11.255 **********
2026-04-06 02:39:43.986964 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:39:43.986984 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:39:43.987003 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:39:43.987022 | orchestrator |
2026-04-06 02:39:43.987040 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-06 02:39:43.987053 | orchestrator | Monday 06 April 2026 02:39:40 +0000 (0:00:03.903) 0:00:15.159 **********
2026-04-06 02:39:43.987066 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:39:43.987079 | orchestrator |
2026-04-06 02:39:43.987091 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-04-06 02:39:43.987104 | orchestrator | Monday 06 April 2026 02:39:41 +0000 (0:00:00.607) 0:00:15.767 **********
2026-04-06 02:39:43.987131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 02:39:43.987158 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:39:43.987185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 02:39:49.389391 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:39:49.389539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 02:39:49.389606 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:39:49.389747 | orchestrator | 2026-04-06 02:39:49.389770 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-06 02:39:49.389789 | orchestrator | Monday 06 April 2026 02:39:43 +0000 (0:00:02.532) 0:00:18.300 ********** 2026-04-06 02:39:49.389809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 02:39:49.389822 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:39:49.389866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 02:39:49.389893 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:39:49.389908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 02:39:49.389922 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:39:49.389935 | orchestrator | 2026-04-06 02:39:49.389948 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-06 02:39:49.389961 | orchestrator | Monday 06 April 2026 02:39:46 +0000 (0:00:02.714) 0:00:21.014 ********** 2026-04-06 02:39:49.389990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 02:39:52.444177 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:39:52.444305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 02:39:52.444321 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:39:52.444343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 02:39:52.444376 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:39:52.444384 | orchestrator | 2026-04-06 02:39:52.444392 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-04-06 02:39:52.444400 | orchestrator | Monday 06 April 2026 02:39:49 +0000 (0:00:02.693) 0:00:23.707 ********** 2026-04-06 02:39:52.444422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-06 02:39:52.444431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-06 02:39:52.444450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-06 02:42:11.276322 | orchestrator | 2026-04-06 02:42:11.276418 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-04-06 02:42:11.276429 | orchestrator | Monday 06 April 2026 02:39:52 +0000 (0:00:03.054) 0:00:26.762 ********** 2026-04-06 02:42:11.276436 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:42:11.276444 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:42:11.276450 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:42:11.276456 | orchestrator | 2026-04-06 02:42:11.276463 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-06 02:42:11.276469 | orchestrator | Monday 06 April 2026 02:39:53 +0000 (0:00:00.831) 0:00:27.593 ********** 2026-04-06 02:42:11.276475 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:42:11.276482 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:42:11.276488 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:42:11.276493 | orchestrator | 2026-04-06 02:42:11.276499 | orchestrator | TASK [mariadb : Establish 
whether the cluster has already existed] ************* 2026-04-06 02:42:11.276505 | orchestrator | Monday 06 April 2026 02:39:53 +0000 (0:00:00.609) 0:00:28.203 ********** 2026-04-06 02:42:11.276511 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:42:11.276517 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:42:11.276523 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:42:11.276529 | orchestrator | 2026-04-06 02:42:11.276535 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-04-06 02:42:11.276541 | orchestrator | Monday 06 April 2026 02:39:54 +0000 (0:00:00.348) 0:00:28.551 ********** 2026-04-06 02:42:11.276548 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-04-06 02:42:11.276555 | orchestrator | ...ignoring 2026-04-06 02:42:11.276561 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-04-06 02:42:11.276567 | orchestrator | ...ignoring 2026-04-06 02:42:11.276573 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-04-06 02:42:11.276579 | orchestrator | ...ignoring 2026-04-06 02:42:11.276608 | orchestrator | 2026-04-06 02:42:11.276614 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-04-06 02:42:11.276620 | orchestrator | Monday 06 April 2026 02:40:05 +0000 (0:00:10.890) 0:00:39.441 ********** 2026-04-06 02:42:11.276626 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:42:11.276632 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:42:11.276638 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:42:11.276644 | orchestrator | 2026-04-06 02:42:11.276663 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-04-06 02:42:11.276669 | orchestrator | Monday 06 April 2026 02:40:05 +0000 (0:00:00.473) 0:00:39.914 ********** 2026-04-06 02:42:11.276704 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:42:11.276713 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:42:11.276722 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:42:11.276729 | orchestrator | 2026-04-06 02:42:11.276734 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-04-06 02:42:11.276740 | orchestrator | Monday 06 April 2026 02:40:06 +0000 (0:00:00.724) 0:00:40.638 ********** 2026-04-06 02:42:11.276746 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:42:11.276752 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:42:11.276758 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:42:11.276764 | orchestrator | 2026-04-06 02:42:11.276782 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-04-06 02:42:11.276788 | orchestrator | Monday 06 April 2026 02:40:06 +0000 (0:00:00.481) 0:00:41.120 ********** 2026-04-06 02:42:11.276794 | orchestrator | skipping: 
[testbed-node-0] 2026-04-06 02:42:11.276800 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:42:11.276806 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:42:11.276812 | orchestrator | 2026-04-06 02:42:11.276818 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-04-06 02:42:11.276823 | orchestrator | Monday 06 April 2026 02:40:07 +0000 (0:00:00.494) 0:00:41.614 ********** 2026-04-06 02:42:11.276829 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:42:11.276835 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:42:11.276841 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:42:11.276847 | orchestrator | 2026-04-06 02:42:11.276853 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-04-06 02:42:11.276859 | orchestrator | Monday 06 April 2026 02:40:07 +0000 (0:00:00.472) 0:00:42.087 ********** 2026-04-06 02:42:11.276865 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:42:11.276871 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:42:11.276877 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:42:11.276883 | orchestrator | 2026-04-06 02:42:11.276891 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-06 02:42:11.276898 | orchestrator | Monday 06 April 2026 02:40:08 +0000 (0:00:00.653) 0:00:42.740 ********** 2026-04-06 02:42:11.276905 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:42:11.276912 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:42:11.276918 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-04-06 02:42:11.276925 | orchestrator | 2026-04-06 02:42:11.276932 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-04-06 02:42:11.276938 | orchestrator | Monday 06 April 2026 02:40:08 +0000 (0:00:00.490) 0:00:43.231 ********** 2026-04-06 
02:42:11.276945 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:42:11.276952 | orchestrator | 2026-04-06 02:42:11.276959 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-04-06 02:42:11.276966 | orchestrator | Monday 06 April 2026 02:40:19 +0000 (0:00:10.409) 0:00:53.640 ********** 2026-04-06 02:42:11.276973 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:42:11.276980 | orchestrator | 2026-04-06 02:42:11.276986 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-06 02:42:11.276992 | orchestrator | Monday 06 April 2026 02:40:19 +0000 (0:00:00.141) 0:00:53.781 ********** 2026-04-06 02:42:11.276998 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:42:11.277025 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:42:11.277032 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:42:11.277038 | orchestrator | 2026-04-06 02:42:11.277044 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-04-06 02:42:11.277050 | orchestrator | Monday 06 April 2026 02:40:20 +0000 (0:00:01.144) 0:00:54.926 ********** 2026-04-06 02:42:11.277056 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:42:11.277062 | orchestrator | 2026-04-06 02:42:11.277068 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-04-06 02:42:11.277074 | orchestrator | Monday 06 April 2026 02:40:29 +0000 (0:00:08.480) 0:01:03.406 ********** 2026-04-06 02:42:11.277080 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:42:11.277085 | orchestrator | 2026-04-06 02:42:11.277091 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-04-06 02:42:11.277097 | orchestrator | Monday 06 April 2026 02:40:30 +0000 (0:00:01.708) 0:01:05.115 ********** 2026-04-06 02:42:11.277103 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:42:11.277109 | 
orchestrator | 2026-04-06 02:42:11.277115 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-04-06 02:42:11.277121 | orchestrator | Monday 06 April 2026 02:40:33 +0000 (0:00:02.685) 0:01:07.801 ********** 2026-04-06 02:42:11.277127 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:42:11.277132 | orchestrator | 2026-04-06 02:42:11.277138 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-04-06 02:42:11.277144 | orchestrator | Monday 06 April 2026 02:40:33 +0000 (0:00:00.145) 0:01:07.946 ********** 2026-04-06 02:42:11.277151 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:42:11.277160 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:42:11.277173 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:42:11.277186 | orchestrator | 2026-04-06 02:42:11.277195 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-04-06 02:42:11.277204 | orchestrator | Monday 06 April 2026 02:40:33 +0000 (0:00:00.332) 0:01:08.279 ********** 2026-04-06 02:42:11.277213 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:42:11.277222 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-06 02:42:11.277230 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:42:11.277239 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:42:11.277248 | orchestrator | 2026-04-06 02:42:11.277257 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-06 02:42:11.277266 | orchestrator | skipping: no hosts matched 2026-04-06 02:42:11.277276 | orchestrator | 2026-04-06 02:42:11.277285 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-06 02:42:11.277295 | orchestrator | 2026-04-06 02:42:11.277305 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-04-06 02:42:11.277313 | orchestrator | Monday 06 April 2026 02:40:34 +0000 (0:00:00.601) 0:01:08.880 ********** 2026-04-06 02:42:11.277323 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:42:11.277332 | orchestrator | 2026-04-06 02:42:11.277338 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-06 02:42:11.277343 | orchestrator | Monday 06 April 2026 02:40:53 +0000 (0:00:19.291) 0:01:28.171 ********** 2026-04-06 02:42:11.277349 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:42:11.277355 | orchestrator | 2026-04-06 02:42:11.277361 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-06 02:42:11.277367 | orchestrator | Monday 06 April 2026 02:41:10 +0000 (0:00:16.610) 0:01:44.782 ********** 2026-04-06 02:42:11.277373 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:42:11.277378 | orchestrator | 2026-04-06 02:42:11.277388 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-06 02:42:11.277394 | orchestrator | 2026-04-06 02:42:11.277405 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-06 02:42:11.277411 | orchestrator | Monday 06 April 2026 02:41:13 +0000 (0:00:02.586) 0:01:47.368 ********** 2026-04-06 02:42:11.277424 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:42:11.277430 | orchestrator | 2026-04-06 02:42:11.277436 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-06 02:42:11.277442 | orchestrator | Monday 06 April 2026 02:41:32 +0000 (0:00:19.123) 0:02:06.491 ********** 2026-04-06 02:42:11.277447 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:42:11.277453 | orchestrator | 2026-04-06 02:42:11.277459 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-06 02:42:11.277464 
| orchestrator | Monday 06 April 2026 02:41:47 +0000 (0:00:15.545) 0:02:22.037 ********** 2026-04-06 02:42:11.277470 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:42:11.277476 | orchestrator | 2026-04-06 02:42:11.277482 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-06 02:42:11.277487 | orchestrator | 2026-04-06 02:42:11.277493 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-06 02:42:11.277499 | orchestrator | Monday 06 April 2026 02:41:50 +0000 (0:00:02.673) 0:02:24.710 ********** 2026-04-06 02:42:11.277505 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:42:11.277511 | orchestrator | 2026-04-06 02:42:11.277516 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-06 02:42:11.277522 | orchestrator | Monday 06 April 2026 02:42:03 +0000 (0:00:12.709) 0:02:37.419 ********** 2026-04-06 02:42:11.277528 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:42:11.277534 | orchestrator | 2026-04-06 02:42:11.277539 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-06 02:42:11.277545 | orchestrator | Monday 06 April 2026 02:42:07 +0000 (0:00:04.549) 0:02:41.969 ********** 2026-04-06 02:42:11.277551 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:42:11.277556 | orchestrator | 2026-04-06 02:42:11.277562 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-06 02:42:11.277568 | orchestrator | 2026-04-06 02:42:11.277574 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-06 02:42:11.277580 | orchestrator | Monday 06 April 2026 02:42:10 +0000 (0:00:02.853) 0:02:44.823 ********** 2026-04-06 02:42:11.277585 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:42:11.277591 | orchestrator | 
2026-04-06 02:42:11.277597 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-04-06 02:42:11.277609 | orchestrator | Monday 06 April 2026 02:42:11 +0000 (0:00:00.770) 0:02:45.594 ********** 2026-04-06 02:42:23.770223 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:42:23.770330 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:42:23.770344 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:42:23.770354 | orchestrator | 2026-04-06 02:42:23.770364 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-04-06 02:42:23.770375 | orchestrator | Monday 06 April 2026 02:42:13 +0000 (0:00:02.322) 0:02:47.917 ********** 2026-04-06 02:42:23.770384 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:42:23.770393 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:42:23.770402 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:42:23.770410 | orchestrator | 2026-04-06 02:42:23.770419 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-04-06 02:42:23.770428 | orchestrator | Monday 06 April 2026 02:42:15 +0000 (0:00:01.945) 0:02:49.862 ********** 2026-04-06 02:42:23.770437 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:42:23.770446 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:42:23.770455 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:42:23.770463 | orchestrator | 2026-04-06 02:42:23.770471 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-04-06 02:42:23.770480 | orchestrator | Monday 06 April 2026 02:42:17 +0000 (0:00:02.209) 0:02:52.072 ********** 2026-04-06 02:42:23.770489 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:42:23.770497 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:42:23.770505 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:42:23.770537 | orchestrator | 
2026-04-06 02:42:23.770547 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-06 02:42:23.770555 | orchestrator | Monday 06 April 2026 02:42:19 +0000 (0:00:02.117) 0:02:54.190 ********** 2026-04-06 02:42:23.770563 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:42:23.770573 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:42:23.770580 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:42:23.770588 | orchestrator | 2026-04-06 02:42:23.770597 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-06 02:42:23.770605 | orchestrator | Monday 06 April 2026 02:42:22 +0000 (0:00:03.030) 0:02:57.220 ********** 2026-04-06 02:42:23.770613 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:42:23.770621 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:42:23.770629 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:42:23.770637 | orchestrator | 2026-04-06 02:42:23.770644 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 02:42:23.770654 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-04-06 02:42:23.770665 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-04-06 02:42:23.770674 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-04-06 02:42:23.770683 | orchestrator | 2026-04-06 02:42:23.770750 | orchestrator | 2026-04-06 02:42:23.770759 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 02:42:23.770767 | orchestrator | Monday 06 April 2026 02:42:23 +0000 (0:00:00.479) 0:02:57.700 ********** 2026-04-06 02:42:23.770777 | orchestrator | =============================================================================== 2026-04-06 02:42:23.770802 | 
orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.41s 2026-04-06 02:42:23.770812 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 32.16s 2026-04-06 02:42:23.770822 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.71s 2026-04-06 02:42:23.770831 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.89s 2026-04-06 02:42:23.770841 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.41s 2026-04-06 02:42:23.770851 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.48s 2026-04-06 02:42:23.770861 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.26s 2026-04-06 02:42:23.770870 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.55s 2026-04-06 02:42:23.770880 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.90s 2026-04-06 02:42:23.770890 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.13s 2026-04-06 02:42:23.770900 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.05s 2026-04-06 02:42:23.770910 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.03s 2026-04-06 02:42:23.770920 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.85s 2026-04-06 02:42:23.770930 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.75s 2026-04-06 02:42:23.770940 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.71s 2026-04-06 02:42:23.770952 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.69s 2026-04-06 02:42:23.770961 | 
orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.69s 2026-04-06 02:42:23.770971 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.53s 2026-04-06 02:42:23.770980 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.32s 2026-04-06 02:42:23.771003 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.21s 2026-04-06 02:42:26.343169 | orchestrator | 2026-04-06 02:42:26 | INFO  | Task ceb82a8d-397a-4ab2-8c7b-05d9c4a484bf (rabbitmq) was prepared for execution. 2026-04-06 02:42:26.343261 | orchestrator | 2026-04-06 02:42:26 | INFO  | It takes a moment until task ceb82a8d-397a-4ab2-8c7b-05d9c4a484bf (rabbitmq) has been started and output is visible here. 2026-04-06 02:42:40.662503 | orchestrator | 2026-04-06 02:42:40.662614 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 02:42:40.662630 | orchestrator | 2026-04-06 02:42:40.662641 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 02:42:40.662651 | orchestrator | Monday 06 April 2026 02:42:30 +0000 (0:00:00.186) 0:00:00.186 ********** 2026-04-06 02:42:40.662662 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:42:40.662673 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:42:40.662682 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:42:40.662692 | orchestrator | 2026-04-06 02:42:40.662754 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 02:42:40.662768 | orchestrator | Monday 06 April 2026 02:42:31 +0000 (0:00:00.319) 0:00:00.505 ********** 2026-04-06 02:42:40.662778 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-04-06 02:42:40.662788 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-04-06 02:42:40.662805 | orchestrator | ok: 
[testbed-node-2] => (item=enable_rabbitmq_True) 2026-04-06 02:42:40.662829 | orchestrator | 2026-04-06 02:42:40.662849 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-04-06 02:42:40.662865 | orchestrator | 2026-04-06 02:42:40.662881 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-06 02:42:40.662898 | orchestrator | Monday 06 April 2026 02:42:31 +0000 (0:00:00.632) 0:00:01.137 ********** 2026-04-06 02:42:40.662915 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:42:40.662933 | orchestrator | 2026-04-06 02:42:40.662951 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-06 02:42:40.662968 | orchestrator | Monday 06 April 2026 02:42:32 +0000 (0:00:00.562) 0:00:01.700 ********** 2026-04-06 02:42:40.662987 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:42:40.663007 | orchestrator | 2026-04-06 02:42:40.663027 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-04-06 02:42:40.663045 | orchestrator | Monday 06 April 2026 02:42:33 +0000 (0:00:00.992) 0:00:02.692 ********** 2026-04-06 02:42:40.663058 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:42:40.663071 | orchestrator | 2026-04-06 02:42:40.663083 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-04-06 02:42:40.663095 | orchestrator | Monday 06 April 2026 02:42:33 +0000 (0:00:00.418) 0:00:03.111 ********** 2026-04-06 02:42:40.663107 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:42:40.663118 | orchestrator | 2026-04-06 02:42:40.663130 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-04-06 02:42:40.663142 | orchestrator | Monday 06 April 2026 02:42:34 +0000 (0:00:00.402) 0:00:03.513 ********** 
2026-04-06 02:42:40.663153 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:42:40.663164 | orchestrator | 2026-04-06 02:42:40.663176 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-06 02:42:40.663187 | orchestrator | Monday 06 April 2026 02:42:34 +0000 (0:00:00.379) 0:00:03.893 ********** 2026-04-06 02:42:40.663199 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:42:40.663211 | orchestrator | 2026-04-06 02:42:40.663222 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-06 02:42:40.663254 | orchestrator | Monday 06 April 2026 02:42:35 +0000 (0:00:00.601) 0:00:04.494 ********** 2026-04-06 02:42:40.663267 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:42:40.663299 | orchestrator | 2026-04-06 02:42:40.663311 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-06 02:42:40.663324 | orchestrator | Monday 06 April 2026 02:42:36 +0000 (0:00:01.022) 0:00:05.517 ********** 2026-04-06 02:42:40.663335 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:42:40.663345 | orchestrator | 2026-04-06 02:42:40.663355 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-06 02:42:40.663364 | orchestrator | Monday 06 April 2026 02:42:37 +0000 (0:00:00.876) 0:00:06.394 ********** 2026-04-06 02:42:40.663374 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:42:40.663384 | orchestrator | 2026-04-06 02:42:40.663394 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-06 02:42:40.663404 | orchestrator | Monday 06 April 2026 02:42:37 +0000 (0:00:00.438) 0:00:06.832 ********** 2026-04-06 02:42:40.663414 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:42:40.663424 | orchestrator | 2026-04-06 
02:42:40.663433 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-06 02:42:40.663443 | orchestrator | Monday 06 April 2026 02:42:38 +0000 (0:00:00.456) 0:00:07.289 ********** 2026-04-06 02:42:40.663480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 02:42:40.663495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 02:42:40.663514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 02:42:40.663533 | orchestrator | 2026-04-06 02:42:40.663543 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-06 02:42:40.663553 | orchestrator | Monday 06 April 2026 02:42:38 +0000 (0:00:00.892) 0:00:08.181 ********** 2026-04-06 02:42:40.663564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 02:42:40.663583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 02:43:00.076509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 02:43:00.076629 | orchestrator | 2026-04-06 02:43:00.076644 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-06 02:43:00.076678 | orchestrator | Monday 06 April 2026 02:42:40 +0000 (0:00:01.756) 0:00:09.938 ********** 2026-04-06 02:43:00.076689 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-06 02:43:00.076701 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-06 02:43:00.076711 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-06 02:43:00.076770 | orchestrator | 2026-04-06 02:43:00.076784 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-04-06 02:43:00.076795 | orchestrator | Monday 06 April 2026 02:42:42 +0000 (0:00:01.503) 0:00:11.442 ********** 2026-04-06 02:43:00.076822 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-06 02:43:00.076835 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-06 02:43:00.076844 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-06 02:43:00.076854 | orchestrator | 2026-04-06 02:43:00.076865 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-06 02:43:00.076875 | orchestrator | Monday 06 April 2026 02:42:43 +0000 (0:00:01.806) 0:00:13.249 ********** 2026-04-06 02:43:00.076884 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-06 02:43:00.076894 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-06 02:43:00.076905 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-06 02:43:00.076915 | orchestrator | 2026-04-06 02:43:00.076926 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-06 02:43:00.076937 | orchestrator | Monday 06 April 2026 02:42:45 +0000 (0:00:01.422) 0:00:14.672 ********** 2026-04-06 02:43:00.076947 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-06 02:43:00.076970 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-06 02:43:00.076989 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-06 02:43:00.077000 | orchestrator | 2026-04-06 02:43:00.077011 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ********************************
2026-04-06 02:43:00.077023 | orchestrator | Monday 06 April 2026 02:42:47 +0000 (0:00:01.774) 0:00:16.446 **********
2026-04-06 02:43:00.077033 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-06 02:43:00.077044 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-06 02:43:00.077055 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-04-06 02:43:00.077066 | orchestrator |
2026-04-06 02:43:00.077077 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-04-06 02:43:00.077089 | orchestrator | Monday 06 April 2026 02:42:48 +0000 (0:00:01.507) 0:00:17.954 **********
2026-04-06 02:43:00.077098 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-06 02:43:00.077106 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-06 02:43:00.077114 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-04-06 02:43:00.077121 | orchestrator |
2026-04-06 02:43:00.077129 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-06 02:43:00.077137 | orchestrator | Monday 06 April 2026 02:42:50 +0000 (0:00:00.443) 0:00:19.447 **********
2026-04-06 02:43:00.077145 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:43:00.077153 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:43:00.077177 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:43:00.077194 | orchestrator |
2026-04-06 02:43:00.077202 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-04-06 02:43:00.077210 | orchestrator | Monday 06 April 2026 02:42:50 +0000 (0:00:00.443) 0:00:19.891 **********
2026-04-06 02:43:00.077220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-06 02:43:00.077235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-06 02:43:00.077243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-06 02:43:00.077250 | orchestrator |
2026-04-06 02:43:00.077257 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-04-06 02:43:00.077263 | orchestrator | Monday 06 April 2026 02:42:51 +0000 (0:00:01.251) 0:00:21.143 **********
2026-04-06 02:43:00.077269 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:43:00.077276 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:43:00.077282 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:43:00.077288 | orchestrator |
2026-04-06 02:43:00.077296 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-04-06 02:43:00.077314 | orchestrator | Monday 06 April 2026 02:42:52 +0000 (0:00:00.819) 0:00:21.963 **********
2026-04-06 02:43:00.077324 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:43:00.077334 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:43:00.077345 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:43:00.077356 | orchestrator |
2026-04-06 02:43:00.077366 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-04-06 02:43:00.077382 | orchestrator | Monday 06 April 2026 02:43:00 +0000 (0:00:07.388) 0:00:29.351 **********
2026-04-06 02:44:35.393782 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:44:35.393944 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:44:35.393958 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:44:35.393965 | orchestrator |
2026-04-06 02:44:35.393973 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-06 02:44:35.393981 | orchestrator |
2026-04-06 02:44:35.393988 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-06 02:44:35.393995 | orchestrator | Monday 06 April 2026 02:43:00 +0000 (0:00:00.562) 0:00:29.914 **********
2026-04-06 02:44:35.394002 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:44:35.394009 | orchestrator |
2026-04-06 02:44:35.394060 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-06 02:44:35.394068 | orchestrator | Monday 06 April 2026 02:43:01 +0000 (0:00:00.594) 0:00:30.509 **********
2026-04-06 02:44:35.394074 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:44:35.394081 | orchestrator |
2026-04-06 02:44:35.394088 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-06 02:44:35.394095 | orchestrator | Monday 06 April 2026 02:43:01 +0000 (0:00:00.269) 0:00:30.778 **********
2026-04-06 02:44:35.394102 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:44:35.394109 | orchestrator |
2026-04-06 02:44:35.394115 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-06 02:44:35.394122 | orchestrator | Monday 06 April 2026 02:43:08 +0000 (0:00:06.627) 0:00:37.405 **********
2026-04-06 02:44:35.394128 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:44:35.394135 | orchestrator |
2026-04-06 02:44:35.394142 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-06 02:44:35.394149 | orchestrator |
2026-04-06 02:44:35.394156 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-06 02:44:35.394163 | orchestrator | Monday 06 April 2026 02:43:56 +0000 (0:00:48.754) 0:01:26.160 **********
2026-04-06 02:44:35.394170 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:44:35.394177 | orchestrator |
2026-04-06 02:44:35.394183 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-06 02:44:35.394190 | orchestrator | Monday 06 April 2026 02:43:57 +0000 (0:00:00.604) 0:01:26.765 **********
2026-04-06 02:44:35.394198 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:44:35.394205 | orchestrator |
2026-04-06 02:44:35.394213 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-06 02:44:35.394220 | orchestrator | Monday 06 April 2026 02:43:57 +0000 (0:00:00.254) 0:01:27.020 **********
2026-04-06 02:44:35.394227 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:44:35.394234 | orchestrator |
2026-04-06 02:44:35.394241 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-06 02:44:35.394265 | orchestrator | Monday 06 April 2026 02:43:59 +0000 (0:00:01.703) 0:01:28.723 **********
2026-04-06 02:44:35.394272 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:44:35.394279 | orchestrator |
2026-04-06 02:44:35.394286 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-06 02:44:35.394293 | orchestrator |
2026-04-06 02:44:35.394299 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-06 02:44:35.394306 | orchestrator | Monday 06 April 2026 02:44:13 +0000 (0:00:14.001) 0:01:42.725 **********
2026-04-06 02:44:35.394313 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:44:35.394319 | orchestrator |
2026-04-06 02:44:35.394348 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-06 02:44:35.394355 | orchestrator | Monday 06 April 2026 02:44:14 +0000 (0:00:00.754) 0:01:43.479 **********
2026-04-06 02:44:35.394362 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:44:35.394369 | orchestrator |
2026-04-06 02:44:35.394376 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-06 02:44:35.394383 | orchestrator | Monday 06 April 2026 02:44:14 +0000 (0:00:00.244) 0:01:43.723 **********
2026-04-06 02:44:35.394390 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:44:35.394397 | orchestrator |
2026-04-06 02:44:35.394405 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-06 02:44:35.394411 | orchestrator | Monday 06 April 2026 02:44:16 +0000 (0:00:01.586) 0:01:45.310 **********
2026-04-06 02:44:35.394418 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:44:35.394424 | orchestrator |
2026-04-06 02:44:35.394431 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-04-06 02:44:35.394437 | orchestrator |
2026-04-06 02:44:35.394444 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-04-06 02:44:35.394451 | orchestrator | Monday 06 April 2026 02:44:31 +0000 (0:00:15.688) 0:02:00.999 **********
2026-04-06 02:44:35.394457 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:44:35.394465 | orchestrator |
2026-04-06 02:44:35.394472 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-04-06 02:44:35.394478 | orchestrator | Monday 06 April 2026 02:44:32 +0000 (0:00:00.548) 0:02:01.547 **********
2026-04-06 02:44:35.394486 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-06 02:44:35.394493 | orchestrator | enable_outward_rabbitmq_True
2026-04-06 02:44:35.394500 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-06 02:44:35.394507 | orchestrator | outward_rabbitmq_restart
2026-04-06 02:44:35.394514 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:44:35.394521 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:44:35.394528 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:44:35.394535 | orchestrator |
2026-04-06 02:44:35.394542 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-04-06 02:44:35.394549 | orchestrator | skipping: no hosts matched
2026-04-06 02:44:35.394555 | orchestrator |
2026-04-06 02:44:35.394561 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-04-06 02:44:35.394567 | orchestrator | skipping: no hosts matched
2026-04-06 02:44:35.394573 | orchestrator |
2026-04-06 02:44:35.394580 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-04-06 02:44:35.394586 | orchestrator | skipping: no hosts matched
2026-04-06 02:44:35.394609 | orchestrator |
2026-04-06 02:44:35.394622 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 02:44:35.394651 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-06 02:44:35.394659 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:44:35.394666 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:44:35.394672 | orchestrator |
2026-04-06 02:44:35.394678 | orchestrator |
2026-04-06 02:44:35.394684 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 02:44:35.394689 | orchestrator | Monday 06 April 2026 02:44:34 +0000 (0:00:02.714) 0:02:04.262 **********
2026-04-06 02:44:35.394696 | orchestrator | ===============================================================================
2026-04-06 02:44:35.394702 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 78.44s
2026-04-06 02:44:35.394708 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 9.92s
2026-04-06 02:44:35.394725 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.39s
2026-04-06 02:44:35.394731 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.71s
2026-04-06 02:44:35.394737 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.95s
2026-04-06 02:44:35.394742 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.81s
2026-04-06 02:44:35.394748 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.77s
2026-04-06 02:44:35.394753 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.76s
2026-04-06 02:44:35.394759 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.51s
2026-04-06 02:44:35.394764 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.50s
2026-04-06 02:44:35.394770 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.49s
2026-04-06 02:44:35.394775 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.42s
2026-04-06 02:44:35.394781 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.25s
2026-04-06 02:44:35.394788 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.02s
2026-04-06 02:44:35.394801 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.99s
2026-04-06 02:44:35.394807 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.89s
2026-04-06 02:44:35.394813 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.88s
2026-04-06 02:44:35.394819 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.82s
2026-04-06 02:44:35.394840 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.77s
2026-04-06 02:44:35.394847 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s
2026-04-06 02:44:38.168804 | orchestrator | 2026-04-06 02:44:38 | INFO  | Task 57090ee9-342d-403d-83ae-24ae510e3a63 (openvswitch) was prepared for execution.
2026-04-06 02:44:38.168921 | orchestrator | 2026-04-06 02:44:38 | INFO  | It takes a moment until task 57090ee9-342d-403d-83ae-24ae510e3a63 (openvswitch) has been started and output is visible here.
2026-04-06 02:44:52.060388 | orchestrator |
2026-04-06 02:44:52.060494 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 02:44:52.060507 | orchestrator |
2026-04-06 02:44:52.060517 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 02:44:52.060526 | orchestrator | Monday 06 April 2026 02:44:42 +0000 (0:00:00.303) 0:00:00.303 **********
2026-04-06 02:44:52.060535 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:44:52.060544 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:44:52.060552 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:44:52.060560 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:44:52.060569 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:44:52.060578 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:44:52.060587 | orchestrator |
2026-04-06 02:44:52.060595 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 02:44:52.060602 | orchestrator | Monday 06 April 2026 02:44:43 +0000 (0:00:00.800) 0:00:01.104 **********
2026-04-06 02:44:52.060609 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-06 02:44:52.060618 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-06 02:44:52.060626 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-06 02:44:52.060635 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-06 02:44:52.060643 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-06 02:44:52.060651 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-06 02:44:52.060684 | orchestrator |
2026-04-06 02:44:52.060693 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-04-06 02:44:52.060701 | orchestrator |
2026-04-06 02:44:52.060710 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-04-06 02:44:52.060718 | orchestrator | Monday 06 April 2026 02:44:44 +0000 (0:00:00.719) 0:00:01.824 **********
2026-04-06 02:44:52.060728 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 02:44:52.060738 | orchestrator |
2026-04-06 02:44:52.060746 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-06 02:44:52.060754 | orchestrator | Monday 06 April 2026 02:44:45 +0000 (0:00:01.268) 0:00:03.092 **********
2026-04-06 02:44:52.060762 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-06 02:44:52.060770 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-06 02:44:52.060778 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-04-06 02:44:52.060786 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-06 02:44:52.060794 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-06 02:44:52.060802 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-06 02:44:52.060809 | orchestrator |
2026-04-06 02:44:52.060817 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-06 02:44:52.060825 | orchestrator | Monday 06 April 2026 02:44:46 +0000 (0:00:01.209) 0:00:04.302 **********
2026-04-06 02:44:52.060833 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-06 02:44:52.060841 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-06 02:44:52.060879 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-06 02:44:52.060888 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-04-06 02:44:52.060895 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-06 02:44:52.060904 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-06 02:44:52.060911 | orchestrator |
2026-04-06 02:44:52.060919 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-06 02:44:52.060928 | orchestrator | Monday 06 April 2026 02:44:48 +0000 (0:00:01.439) 0:00:05.741 **********
2026-04-06 02:44:52.060938 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-04-06 02:44:52.060947 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:44:52.060958 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-04-06 02:44:52.060967 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:44:52.060976 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-04-06 02:44:52.060986 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:44:52.060995 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-04-06 02:44:52.061005 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:44:52.061015 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-04-06 02:44:52.061021 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:44:52.061029 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-04-06 02:44:52.061036 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:44:52.061045 | orchestrator |
2026-04-06 02:44:52.061055 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-04-06 02:44:52.061064 | orchestrator | Monday 06 April 2026 02:44:49 +0000 (0:00:00.890) 0:00:07.016 **********
2026-04-06 02:44:52.061074 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:44:52.061084 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:44:52.061094 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:44:52.061105 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:44:52.061112 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:44:52.061120 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:44:52.061129 | orchestrator |
2026-04-06 02:44:52.061137 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-04-06 02:44:52.061157 | orchestrator | Monday 06 April 2026 02:44:50 +0000 (0:00:00.890) 0:00:07.907 **********
2026-04-06 02:44:52.061188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 02:44:52.061202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 02:44:52.061212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 02:44:52.061299 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 02:44:52.061321 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 02:44:52.061339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 02:44:54.411127 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 02:44:54.411221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 02:44:54.411230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 02:44:54.411236 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 02:44:54.411255 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 02:44:54.411290 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 02:44:54.411296 | orchestrator |
2026-04-06 02:44:54.411303 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-04-06 02:44:54.411309 | orchestrator | Monday 06 April 2026 02:44:52 +0000 (0:00:01.597) 0:00:09.504 **********
2026-04-06 02:44:54.411314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 02:44:54.411320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 02:44:54.411325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 02:44:54.411331 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 02:44:54.411346 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 02:44:54.411359 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 02:44:57.229083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 02:44:57.229177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 02:44:57.229188 | orchestrator | changed: [testbed-node-2] =>
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-06 02:44:57.229211 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-06 02:44:57.229236 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-06 02:44:57.229258 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-06 02:44:57.229266 | orchestrator | 2026-04-06 02:44:57.229275 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-04-06 02:44:57.229284 | orchestrator | Monday 06 April 2026 02:44:54 +0000 (0:00:02.351) 0:00:11.856 ********** 2026-04-06 02:44:57.229290 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:44:57.229298 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:44:57.229305 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:44:57.229312 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:44:57.229319 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:44:57.229326 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:44:57.229333 | orchestrator | 2026-04-06 02:44:57.229341 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-04-06 02:44:57.229348 | orchestrator | Monday 06 April 2026 02:44:55 +0000 (0:00:01.088) 0:00:12.944 ********** 2026-04-06 02:44:57.229356 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-06 02:44:57.229365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-06 02:44:57.229382 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-06 02:44:57.229390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-06 02:44:57.229403 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-06 02:45:23.018190 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-06 02:45:23.018300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-06 02:45:23.018312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-06 
02:45:23.018348 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-06 02:45:23.018355 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-06 02:45:23.018379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 02:45:23.018386 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 02:45:23.018393 | orchestrator |
2026-04-06 02:45:23.018401 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-06 02:45:23.018409 | orchestrator | Monday 06 April 2026 02:44:57 +0000 (0:00:01.725) 0:00:14.669 **********
2026-04-06 02:45:23.018415 | orchestrator |
2026-04-06 02:45:23.018424 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-06 02:45:23.018435 | orchestrator | Monday 06 April 2026 02:44:57 +0000 (0:00:00.346) 0:00:15.016 **********
2026-04-06 02:45:23.018453 | orchestrator |
2026-04-06 02:45:23.018464 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-06 02:45:23.018476 | orchestrator | Monday 06 April 2026 02:44:57 +0000 (0:00:00.158) 0:00:15.174 **********
2026-04-06 02:45:23.018486 | orchestrator |
2026-04-06 02:45:23.018496 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-06 02:45:23.018507 | orchestrator | Monday 06 April 2026 02:44:57 +0000 (0:00:00.140) 0:00:15.315 **********
2026-04-06 02:45:23.018517 | orchestrator |
2026-04-06 02:45:23.018528 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-06 02:45:23.018537 | orchestrator | Monday 06 April 2026 02:44:58 +0000 (0:00:00.136) 0:00:15.452 **********
2026-04-06 02:45:23.018543 | orchestrator |
2026-04-06 02:45:23.018550 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-06 02:45:23.018556 | orchestrator | Monday 06 April 2026 02:44:58 +0000 (0:00:00.139) 0:00:15.591 **********
2026-04-06 02:45:23.018562 | orchestrator |
2026-04-06 02:45:23.018568 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-04-06 02:45:23.018574 | orchestrator | Monday 06 April 2026 02:44:58 +0000 (0:00:00.158) 0:00:15.749 **********
2026-04-06 02:45:23.018581 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:45:23.018588 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:45:23.018595 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:45:23.018604 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:45:23.018615 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:45:23.018624 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:45:23.018632 | orchestrator |
2026-04-06 02:45:23.018642 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-04-06 02:45:23.018653 | orchestrator | Monday 06 April 2026 02:45:07 +0000 (0:00:08.801) 0:00:24.551 **********
2026-04-06 02:45:23.018666 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:45:23.018675 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:45:23.018684 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:45:23.018697 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:45:23.018704 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:45:23.018712 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:45:23.018719 | orchestrator |
2026-04-06 02:45:23.018727 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-04-06 02:45:23.018737 | orchestrator | Monday 06 April 2026 02:45:08 +0000 (0:00:01.127) 0:00:25.678 **********
2026-04-06 02:45:23.018748 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:45:23.018758 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:45:23.018768 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:45:23.018784 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:45:23.018796 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:45:23.018806 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:45:23.018816 | orchestrator |
2026-04-06 02:45:23.018826 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-04-06 02:45:23.018836 | orchestrator | Monday 06 April 2026 02:45:16 +0000 (0:00:08.064) 0:00:33.743 **********
2026-04-06 02:45:23.018846 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-04-06 02:45:23.018856 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-04-06 02:45:23.018866 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-04-06 02:45:23.018876 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-04-06 02:45:23.018915 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-04-06 02:45:23.018926 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-04-06 02:45:23.018937 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-04-06 02:45:23.018966 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-04-06 02:45:36.210569 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-04-06 02:45:36.210709 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-04-06 02:45:36.210726 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-04-06 02:45:36.210738 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-04-06 02:45:36.210750 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-06 02:45:36.210761 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-06 02:45:36.210772 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-06 02:45:36.210795 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-06 02:45:36.210806 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-06 02:45:36.210817 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-06 02:45:36.210829 | orchestrator |
2026-04-06 02:45:36.210841 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
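For orientation: the "Set system-id, hostname and hw-offload" task above loops over `external_ids`/`other_config` items per node. As a rough sketch, the equivalent manual `ovs-vsctl` calls for one node would look like this (illustrative only; kolla-ansible performs these writes through its own module, so the exact invocation may differ):

```shell
# Hypothetical manual equivalent for testbed-node-0 (requires a running ovsdb-server)
ovs-vsctl set Open_vSwitch . external_ids:system-id=testbed-node-0
ovs-vsctl set Open_vSwitch . external_ids:hostname=testbed-node-0
# 'state': 'absent' on hw-offload maps to removing the key from other_config
ovs-vsctl remove Open_vSwitch . other_config hw-offload
```

This also explains why the hw-offload items report `ok` rather than `changed`: the key was already absent, so the removal is a no-op.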
2026-04-06 02:45:36.210853 | orchestrator | Monday 06 April 2026 02:45:23 +0000 (0:00:06.625) 0:00:40.368 **********
2026-04-06 02:45:36.210866 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-04-06 02:45:36.210878 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:45:36.210890 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-04-06 02:45:36.210931 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:45:36.210944 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-04-06 02:45:36.210955 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:45:36.210966 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-04-06 02:45:36.210977 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-04-06 02:45:36.210988 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-04-06 02:45:36.210999 | orchestrator |
2026-04-06 02:45:36.211011 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-04-06 02:45:36.211022 | orchestrator | Monday 06 April 2026 02:45:25 +0000 (0:00:02.483) 0:00:42.851 **********
2026-04-06 02:45:36.211033 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-04-06 02:45:36.211044 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:45:36.211055 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-04-06 02:45:36.211066 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:45:36.211077 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-04-06 02:45:36.211088 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:45:36.211099 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-04-06 02:45:36.211110 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-04-06 02:45:36.211138 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-04-06 02:45:36.211150 | orchestrator |
2026-04-06 02:45:36.211161 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-04-06 02:45:36.211172 | orchestrator | Monday 06 April 2026 02:45:28 +0000 (0:00:03.112) 0:00:45.964 **********
2026-04-06 02:45:36.211183 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:45:36.211194 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:45:36.211225 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:45:36.211236 | orchestrator | changed: [testbed-node-3]
2026-04-06 02:45:36.211247 | orchestrator | changed: [testbed-node-4]
2026-04-06 02:45:36.211258 | orchestrator | changed: [testbed-node-5]
2026-04-06 02:45:36.211268 | orchestrator |
2026-04-06 02:45:36.211280 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 02:45:36.211292 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-06 02:45:36.211305 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-06 02:45:36.211316 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-06 02:45:36.211327 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-06 02:45:36.211338 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-06 02:45:36.211362 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-06 02:45:36.211383 | orchestrator |
2026-04-06 02:45:36.211395 | orchestrator |
2026-04-06 02:45:36.211406 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 02:45:36.211417 | orchestrator | Monday 06 April 2026 02:45:35 +0000 (0:00:07.147) 0:00:53.111 **********
2026-04-06 02:45:36.211447 | orchestrator | ===============================================================================
2026-04-06 02:45:36.211459 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.21s
2026-04-06 02:45:36.211470 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.80s
2026-04-06 02:45:36.211481 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.63s
2026-04-06 02:45:36.211492 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.11s
2026-04-06 02:45:36.211502 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.48s
2026-04-06 02:45:36.211513 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.35s
2026-04-06 02:45:36.211524 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.73s
2026-04-06 02:45:36.211534 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.60s
2026-04-06 02:45:36.211545 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.44s
2026-04-06 02:45:36.211557 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.28s
2026-04-06 02:45:36.211576 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.27s
2026-04-06 02:45:36.211594 | orchestrator | module-load : Load modules ---------------------------------------------- 1.21s
2026-04-06 02:45:36.211615 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.13s
2026-04-06 02:45:36.211635 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.09s
2026-04-06 02:45:36.211656 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.08s
2026-04-06 02:45:36.211675 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.89s
2026-04-06 02:45:36.211693 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.80s
2026-04-06 02:45:36.211705 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.72s
2026-04-06 02:45:38.809602 | orchestrator | 2026-04-06 02:45:38 | INFO  | Task 606a18f6-1a7b-4022-a82d-656914835d51 (ovn) was prepared for execution.
2026-04-06 02:45:38.809689 | orchestrator | 2026-04-06 02:45:38 | INFO  | It takes a moment until task 606a18f6-1a7b-4022-a82d-656914835d51 (ovn) has been started and output is visible here.
2026-04-06 02:45:50.219786 | orchestrator |
2026-04-06 02:45:50.219907 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 02:45:50.220056 | orchestrator |
2026-04-06 02:45:50.220088 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 02:45:50.220109 | orchestrator | Monday 06 April 2026 02:45:43 +0000 (0:00:00.181) 0:00:00.181 **********
2026-04-06 02:45:50.220127 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:45:50.220146 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:45:50.220181 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:45:50.220199 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:45:50.220216 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:45:50.220232 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:45:50.220250 | orchestrator |
2026-04-06 02:45:50.220272 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 02:45:50.220315 | orchestrator | Monday 06 April 2026 02:45:44 +0000 (0:00:00.797) 0:00:00.979 **********
2026-04-06 02:45:50.220339 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-04-06 02:45:50.220362 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-04-06
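The PLAY RECAP host lines above follow a fixed `key=value` layout, which makes them easy to post-process when scanning job logs. A minimal sketch (hypothetical helper, not part of this job) that parses one such line:

```python
import re

def parse_recap(line: str) -> dict:
    """Parse an Ansible PLAY RECAP host line into a stats dict."""
    host, _, stats = line.partition(":")
    counters = re.findall(r"(\w+)=(\d+)", stats)
    return {"host": host.strip(), **{k: int(v) for k, v in counters}}

line = "testbed-node-0 : ok=15 changed=11 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0"
stats = parse_recap(line)
# e.g. stats["ok"] == 15, stats["failed"] == 0
```

A check like `stats["failed"] == 0 and stats["unreachable"] == 0` is how CI tooling typically decides whether a play succeeded.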
02:45:50.220383 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-04-06 02:45:50.220410 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-04-06 02:45:50.220437 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-04-06 02:45:50.220456 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-04-06 02:45:50.220475 | orchestrator | 2026-04-06 02:45:50.220493 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-04-06 02:45:50.220512 | orchestrator | 2026-04-06 02:45:50.220531 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-04-06 02:45:50.220549 | orchestrator | Monday 06 April 2026 02:45:44 +0000 (0:00:00.883) 0:00:01.862 ********** 2026-04-06 02:45:50.220571 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:45:50.220594 | orchestrator | 2026-04-06 02:45:50.220616 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-04-06 02:45:50.220639 | orchestrator | Monday 06 April 2026 02:45:46 +0000 (0:00:01.273) 0:00:03.135 ********** 2026-04-06 02:45:50.220665 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:45:50.220701 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:45:50.220722 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:45:50.220742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:45:50.220798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:45:50.220850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:45:50.220870 | orchestrator | 2026-04-06 02:45:50.220888 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-04-06 02:45:50.220908 | orchestrator | Monday 06 April 2026 02:45:47 +0000 (0:00:01.189) 0:00:04.325 ********** 2026-04-06 02:45:50.220964 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:45:50.220984 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:45:50.221001 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:45:50.221017 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:45:50.221035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:45:50.221053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:45:50.221085 | orchestrator | 2026-04-06 02:45:50.221102 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-04-06 02:45:50.221118 | orchestrator | Monday 06 April 2026 02:45:48 +0000 (0:00:01.511) 0:00:05.836 ********** 2026-04-06 02:45:50.221135 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:45:50.221154 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:45:50.221187 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:46:13.499630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:46:13.499721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:46:13.499729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:46:13.499735 | orchestrator | 2026-04-06 02:46:13.499741 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-06 02:46:13.499748 | orchestrator | Monday 06 April 2026 02:45:50 +0000 (0:00:01.236) 0:00:07.073 ********** 2026-04-06 02:46:13.499753 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:46:13.499773 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:46:13.499779 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:46:13.499784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:46:13.499789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:46:13.499807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:46:13.499813 | orchestrator | 2026-04-06 02:46:13.499819 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-04-06 02:46:13.499824 | orchestrator | Monday 06 April 2026 02:45:51 +0000 (0:00:01.625) 0:00:08.699 ********** 
2026-04-06 02:46:13.499835 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:46:13.499841 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:46:13.499846 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:46:13.499851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:46:13.499861 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:46:13.499866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 02:46:13.499871 | orchestrator | 2026-04-06 02:46:13.499877 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-06 02:46:13.499882 | orchestrator | Monday 06 April 2026 02:45:53 +0000 (0:00:01.483) 0:00:10.183 ********** 2026-04-06 02:46:13.499887 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:46:13.499894 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:46:13.499899 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:46:13.499904 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:46:13.499909 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:46:13.499914 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:46:13.499919 | orchestrator | 2026-04-06 02:46:13.499924 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-04-06 02:46:13.499929 | orchestrator | Monday 06 April 2026 02:45:55 +0000 (0:00:02.458) 0:00:12.641 ********** 2026-04-06 02:46:13.499935 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 
2026-04-06 02:46:13.499940 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-04-06 02:46:13.499945 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-04-06 02:46:13.499996 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-04-06 02:46:13.500003 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-04-06 02:46:13.500008 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-04-06 02:46:13.500018 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-06 02:46:56.048751 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-06 02:46:56.048877 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-06 02:46:56.048911 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-06 02:46:56.048923 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-06 02:46:56.048935 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-06 02:46:56.048946 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-06 02:46:56.048959 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-06 02:46:56.048997 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-06 02:46:56.049069 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-06 02:46:56.049082 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-06 02:46:56.049094 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-06 02:46:56.049106 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-06 02:46:56.049118 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-06 02:46:56.049129 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-06 02:46:56.049140 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-06 02:46:56.049152 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-06 02:46:56.049163 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-06 02:46:56.049174 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-06 02:46:56.049185 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-06 02:46:56.049196 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-06 02:46:56.049206 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-06 02:46:56.049217 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2026-04-06 02:46:56.049228 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-06 02:46:56.049239 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-06 02:46:56.049251 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-06 02:46:56.049265 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-06 02:46:56.049278 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-06 02:46:56.049291 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-06 02:46:56.049304 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-06 02:46:56.049317 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-06 02:46:56.049331 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-06 02:46:56.049344 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-06 02:46:56.049358 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-06 02:46:56.049370 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-06 02:46:56.049383 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-06 02:46:56.049396 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 
'present'}) 2026-04-06 02:46:56.049439 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-04-06 02:46:56.049453 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-04-06 02:46:56.049473 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-04-06 02:46:56.049486 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-04-06 02:46:56.049497 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-04-06 02:46:56.049508 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-06 02:46:56.049519 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-06 02:46:56.049530 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-06 02:46:56.049541 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-06 02:46:56.049552 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-06 02:46:56.049563 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-06 02:46:56.049573 | orchestrator | 2026-04-06 02:46:56.049585 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-04-06 02:46:56.049596 | orchestrator | Monday 06 April 2026 02:46:12 +0000 (0:00:17.052) 0:00:29.694 ********** 2026-04-06 02:46:56.049607 | orchestrator | 2026-04-06 02:46:56.049619 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-06 02:46:56.049630 | orchestrator | Monday 06 April 2026 02:46:13 +0000 (0:00:00.274) 0:00:29.969 ********** 2026-04-06 02:46:56.049640 | orchestrator | 2026-04-06 02:46:56.049651 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-06 02:46:56.049662 | orchestrator | Monday 06 April 2026 02:46:13 +0000 (0:00:00.065) 0:00:30.034 ********** 2026-04-06 02:46:56.049673 | orchestrator | 2026-04-06 02:46:56.049684 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-06 02:46:56.049694 | orchestrator | Monday 06 April 2026 02:46:13 +0000 (0:00:00.077) 0:00:30.112 ********** 2026-04-06 02:46:56.049705 | orchestrator | 2026-04-06 02:46:56.049716 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-06 02:46:56.049727 | orchestrator | Monday 06 April 2026 02:46:13 +0000 (0:00:00.077) 0:00:30.190 ********** 2026-04-06 02:46:56.049738 | orchestrator | 2026-04-06 02:46:56.049749 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-06 02:46:56.049759 | orchestrator | Monday 06 April 2026 02:46:13 +0000 (0:00:00.087) 0:00:30.277 ********** 2026-04-06 02:46:56.049770 | orchestrator | 2026-04-06 02:46:56.049781 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-04-06 02:46:56.049792 | orchestrator | Monday 06 April 2026 02:46:13 +0000 (0:00:00.068) 0:00:30.345 ********** 2026-04-06 02:46:56.049803 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:46:56.049815 | orchestrator | ok: 
[testbed-node-5] 2026-04-06 02:46:56.049826 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:46:56.049837 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:46:56.049847 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:46:56.049858 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:46:56.049869 | orchestrator | 2026-04-06 02:46:56.049880 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-04-06 02:46:56.049891 | orchestrator | Monday 06 April 2026 02:46:15 +0000 (0:00:01.674) 0:00:32.020 ********** 2026-04-06 02:46:56.049910 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:46:56.049921 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:46:56.049932 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:46:56.049943 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:46:56.049953 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:46:56.049964 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:46:56.049975 | orchestrator | 2026-04-06 02:46:56.049986 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-04-06 02:46:56.049997 | orchestrator | 2026-04-06 02:46:56.050083 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-06 02:46:56.050097 | orchestrator | Monday 06 April 2026 02:46:53 +0000 (0:00:38.578) 0:01:10.598 ********** 2026-04-06 02:46:56.050108 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:46:56.050119 | orchestrator | 2026-04-06 02:46:56.050130 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-06 02:46:56.050141 | orchestrator | Monday 06 April 2026 02:46:54 +0000 (0:00:00.746) 0:01:11.344 ********** 2026-04-06 02:46:56.050152 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-06 02:46:56.050163 | orchestrator | 2026-04-06 02:46:56.050174 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-04-06 02:46:56.050185 | orchestrator | Monday 06 April 2026 02:46:55 +0000 (0:00:00.576) 0:01:11.921 ********** 2026-04-06 02:46:56.050196 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:46:56.050207 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:46:56.050218 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:46:56.050229 | orchestrator | 2026-04-06 02:46:56.050240 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-04-06 02:46:56.050269 | orchestrator | Monday 06 April 2026 02:46:56 +0000 (0:00:00.978) 0:01:12.899 ********** 2026-04-06 02:47:08.130103 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:47:08.130191 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:47:08.130198 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:47:08.130204 | orchestrator | 2026-04-06 02:47:08.130210 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-04-06 02:47:08.130226 | orchestrator | Monday 06 April 2026 02:46:56 +0000 (0:00:00.376) 0:01:13.276 ********** 2026-04-06 02:47:08.130232 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:47:08.130236 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:47:08.130241 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:47:08.130246 | orchestrator | 2026-04-06 02:47:08.130251 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-04-06 02:47:08.130256 | orchestrator | Monday 06 April 2026 02:46:56 +0000 (0:00:00.334) 0:01:13.610 ********** 2026-04-06 02:47:08.130261 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:47:08.130266 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:47:08.130270 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:47:08.130275 | orchestrator | 
2026-04-06 02:47:08.130279 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-04-06 02:47:08.130284 | orchestrator | Monday 06 April 2026 02:46:57 +0000 (0:00:00.359) 0:01:13.970 ********** 2026-04-06 02:47:08.130289 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:47:08.130293 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:47:08.130298 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:47:08.130302 | orchestrator | 2026-04-06 02:47:08.130307 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-04-06 02:47:08.130312 | orchestrator | Monday 06 April 2026 02:46:57 +0000 (0:00:00.553) 0:01:14.524 ********** 2026-04-06 02:47:08.130317 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:47:08.130322 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:47:08.130327 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:47:08.130331 | orchestrator | 2026-04-06 02:47:08.130336 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-04-06 02:47:08.130354 | orchestrator | Monday 06 April 2026 02:46:57 +0000 (0:00:00.335) 0:01:14.860 ********** 2026-04-06 02:47:08.130359 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:47:08.130364 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:47:08.130368 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:47:08.130373 | orchestrator | 2026-04-06 02:47:08.130377 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-04-06 02:47:08.130382 | orchestrator | Monday 06 April 2026 02:46:58 +0000 (0:00:00.360) 0:01:15.220 ********** 2026-04-06 02:47:08.130387 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:47:08.130391 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:47:08.130396 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:47:08.130400 | orchestrator | 2026-04-06 
02:47:08.130405 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-04-06 02:47:08.130409 | orchestrator | Monday 06 April 2026 02:46:58 +0000 (0:00:00.315) 0:01:15.535 **********
2026-04-06 02:47:08.130414 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:47:08.130419 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:47:08.130423 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:47:08.130428 | orchestrator |
2026-04-06 02:47:08.130433 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-04-06 02:47:08.130437 | orchestrator | Monday 06 April 2026 02:46:59 +0000 (0:00:00.341) 0:01:15.877 **********
2026-04-06 02:47:08.130442 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:47:08.130447 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:47:08.130451 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:47:08.130456 | orchestrator |
2026-04-06 02:47:08.130461 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-04-06 02:47:08.130465 | orchestrator | Monday 06 April 2026 02:46:59 +0000 (0:00:00.562) 0:01:16.439 **********
2026-04-06 02:47:08.130470 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:47:08.130474 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:47:08.130479 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:47:08.130483 | orchestrator |
2026-04-06 02:47:08.130488 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-04-06 02:47:08.130493 | orchestrator | Monday 06 April 2026 02:46:59 +0000 (0:00:00.331) 0:01:16.771 **********
2026-04-06 02:47:08.130497 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:47:08.130502 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:47:08.130506 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:47:08.130511 | orchestrator |
2026-04-06 02:47:08.130516 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-04-06 02:47:08.130520 | orchestrator | Monday 06 April 2026 02:47:00 +0000 (0:00:00.330) 0:01:17.101 **********
2026-04-06 02:47:08.130525 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:47:08.130529 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:47:08.130534 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:47:08.130539 | orchestrator |
2026-04-06 02:47:08.130543 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-04-06 02:47:08.130548 | orchestrator | Monday 06 April 2026 02:47:00 +0000 (0:00:00.340) 0:01:17.441 **********
2026-04-06 02:47:08.130552 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:47:08.130557 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:47:08.130561 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:47:08.130566 | orchestrator |
2026-04-06 02:47:08.130571 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-04-06 02:47:08.130575 | orchestrator | Monday 06 April 2026 02:47:01 +0000 (0:00:00.574) 0:01:18.015 **********
2026-04-06 02:47:08.130580 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:47:08.130584 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:47:08.130589 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:47:08.130594 | orchestrator |
2026-04-06 02:47:08.130599 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-04-06 02:47:08.130610 | orchestrator | Monday 06 April 2026 02:47:01 +0000 (0:00:00.327) 0:01:18.343 **********
2026-04-06 02:47:08.130616 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:47:08.130621 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:47:08.130626 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:47:08.130631 | orchestrator |
2026-04-06 02:47:08.130637 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-04-06 02:47:08.130642 | orchestrator | Monday 06 April 2026 02:47:01 +0000 (0:00:00.395) 0:01:18.738 **********
2026-04-06 02:47:08.130661 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:47:08.130667 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:47:08.130672 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:47:08.130678 | orchestrator |
2026-04-06 02:47:08.130683 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-04-06 02:47:08.130691 | orchestrator | Monday 06 April 2026 02:47:02 +0000 (0:00:00.329) 0:01:19.068 **********
2026-04-06 02:47:08.130698 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:47:08.130704 | orchestrator |
2026-04-06 02:47:08.130709 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-04-06 02:47:08.130715 | orchestrator | Monday 06 April 2026 02:47:03 +0000 (0:00:00.812) 0:01:19.881 **********
2026-04-06 02:47:08.130729 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:47:08.130734 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:47:08.130740 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:47:08.130751 | orchestrator |
2026-04-06 02:47:08.130757 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-04-06 02:47:08.130762 | orchestrator | Monday 06 April 2026 02:47:03 +0000 (0:00:00.446) 0:01:20.327 **********
2026-04-06 02:47:08.130768 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:47:08.130773 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:47:08.130778 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:47:08.130784 | orchestrator |
2026-04-06 02:47:08.130789 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-04-06 02:47:08.130794 | orchestrator | Monday 06 April 2026 02:47:04 +0000 (0:00:00.576) 0:01:20.903 **********
2026-04-06 02:47:08.130800 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:47:08.130806 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:47:08.130811 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:47:08.130816 | orchestrator |
2026-04-06 02:47:08.130822 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-04-06 02:47:08.130828 | orchestrator | Monday 06 April 2026 02:47:04 +0000 (0:00:00.365) 0:01:21.269 **********
2026-04-06 02:47:08.130832 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:47:08.130837 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:47:08.130841 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:47:08.130846 | orchestrator |
2026-04-06 02:47:08.130851 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-04-06 02:47:08.130856 | orchestrator | Monday 06 April 2026 02:47:04 +0000 (0:00:00.578) 0:01:21.848 **********
2026-04-06 02:47:08.130860 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:47:08.130865 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:47:08.130869 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:47:08.130874 | orchestrator |
2026-04-06 02:47:08.130879 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-04-06 02:47:08.130883 | orchestrator | Monday 06 April 2026 02:47:05 +0000 (0:00:00.406) 0:01:22.255 **********
2026-04-06 02:47:08.130888 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:47:08.130892 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:47:08.130897 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:47:08.130902 | orchestrator |
2026-04-06 02:47:08.130906 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-04-06 02:47:08.130911 | orchestrator | Monday 06 April 2026 02:47:05 +0000 (0:00:00.393) 0:01:22.648 **********
2026-04-06 02:47:08.130923 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:47:08.130927 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:47:08.130932 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:47:08.130937 | orchestrator |
2026-04-06 02:47:08.130941 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-04-06 02:47:08.130946 | orchestrator | Monday 06 April 2026 02:47:06 +0000 (0:00:00.335) 0:01:22.984 **********
2026-04-06 02:47:08.130951 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:47:08.130955 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:47:08.130960 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:47:08.130964 | orchestrator |
2026-04-06 02:47:08.130969 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-04-06 02:47:08.130974 | orchestrator | Monday 06 April 2026 02:47:06 +0000 (0:00:00.620) 0:01:23.605 **********
2026-04-06 02:47:08.130981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:08.130988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:08.130993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:08.131005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:14.445091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:14.445191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:14.445203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:14.445212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:14.445238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:14.445246 | orchestrator |
2026-04-06 02:47:14.445255 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-04-06 02:47:14.445265 | orchestrator | Monday 06 April 2026 02:47:08 +0000 (0:00:01.374) 0:01:24.979 **********
2026-04-06 02:47:14.445275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:14.445285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:14.445292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:14.445300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:14.445338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:14.445347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:14.445355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:14.445363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:14.445378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:14.445386 | orchestrator |
2026-04-06 02:47:14.445394 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-04-06 02:47:14.445401 | orchestrator | Monday 06 April 2026 02:47:11 +0000 (0:00:03.856) 0:01:28.836 **********
2026-04-06 02:47:14.445409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:14.445417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:14.445425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:14.445432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:14.445440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:14.445458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:33.823731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:33.823905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:33.823922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:33.823935 | orchestrator |
2026-04-06 02:47:33.823949 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-06 02:47:33.823962 | orchestrator | Monday 06 April 2026 02:47:13 +0000 (0:00:01.992) 0:01:30.828 **********
2026-04-06 02:47:33.823978 | orchestrator |
2026-04-06 02:47:33.823998 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-06 02:47:33.824024 | orchestrator | Monday 06 April 2026 02:47:14 +0000 (0:00:00.076) 0:01:30.904 **********
2026-04-06 02:47:33.824053 | orchestrator |
2026-04-06 02:47:33.824105 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-06 02:47:33.824123 | orchestrator | Monday 06 April 2026 02:47:14 +0000 (0:00:00.076) 0:01:30.980 **********
2026-04-06 02:47:33.824141 | orchestrator |
2026-04-06 02:47:33.824160 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-04-06 02:47:33.824178 | orchestrator | Monday 06 April 2026 02:47:14 +0000 (0:00:00.311) 0:01:31.292 **********
2026-04-06 02:47:33.824197 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:47:33.824218 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:47:33.824238 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:47:33.824257 | orchestrator |
2026-04-06 02:47:33.824278 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-04-06 02:47:33.824298 | orchestrator | Monday 06 April 2026 02:47:21 +0000 (0:00:07.510) 0:01:38.802 **********
2026-04-06 02:47:33.824322 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:47:33.824348 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:47:33.824373 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:47:33.824395 | orchestrator |
2026-04-06 02:47:33.824416 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-04-06 02:47:33.824436 | orchestrator | Monday 06 April 2026 02:47:24 +0000 (0:00:02.542) 0:01:41.345 **********
2026-04-06 02:47:33.824455 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:47:33.824475 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:47:33.824495 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:47:33.824515 | orchestrator |
2026-04-06 02:47:33.824537 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-04-06 02:47:33.824559 | orchestrator | Monday 06 April 2026 02:47:26 +0000 (0:00:02.501) 0:01:43.846 **********
2026-04-06 02:47:33.824582 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:47:33.824604 | orchestrator |
2026-04-06 02:47:33.824622 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-04-06 02:47:33.824641 | orchestrator | Monday 06 April 2026 02:47:27 +0000 (0:00:00.129) 0:01:43.976 **********
2026-04-06 02:47:33.824660 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:47:33.824679 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:47:33.824697 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:47:33.824714 | orchestrator |
2026-04-06 02:47:33.824732 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-04-06 02:47:33.824750 | orchestrator | Monday 06 April 2026 02:47:28 +0000 (0:00:01.064) 0:01:45.040 **********
2026-04-06 02:47:33.824767 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:47:33.824811 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:47:33.824833 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:47:33.824853 | orchestrator |
2026-04-06 02:47:33.824874 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-04-06 02:47:33.824895 | orchestrator | Monday 06 April 2026 02:47:28 +0000 (0:00:00.616) 0:01:45.657 **********
2026-04-06 02:47:33.824916 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:47:33.824937 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:47:33.824957 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:47:33.824978 | orchestrator |
2026-04-06 02:47:33.824999 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-04-06 02:47:33.825044 | orchestrator | Monday 06 April 2026 02:47:29 +0000 (0:00:00.748) 0:01:46.406 **********
2026-04-06 02:47:33.825094 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:47:33.825112 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:47:33.825129 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:47:33.825149 | orchestrator |
2026-04-06 02:47:33.825169 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-04-06 02:47:33.825190 | orchestrator | Monday 06 April 2026 02:47:30 +0000 (0:00:00.631) 0:01:47.038 **********
2026-04-06 02:47:33.825210 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:47:33.825231 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:47:33.825285 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:47:33.825307 | orchestrator |
2026-04-06 02:47:33.825327 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-04-06 02:47:33.825348 | orchestrator | Monday 06 April 2026 02:47:30 +0000 (0:00:00.763) 0:01:47.802 **********
2026-04-06 02:47:33.825371 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:47:33.825391 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:47:33.825412 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:47:33.825432 | orchestrator |
2026-04-06 02:47:33.825452 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-04-06 02:47:33.825471 | orchestrator | Monday 06 April 2026 02:47:31 +0000 (0:00:01.061) 0:01:48.863 **********
2026-04-06 02:47:33.825490 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:47:33.825508 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:47:33.825526 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:47:33.825546 | orchestrator |
2026-04-06 02:47:33.825559 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-04-06 02:47:33.825570 | orchestrator | Monday 06 April 2026 02:47:32 +0000 (0:00:00.346) 0:01:49.209 **********
2026-04-06 02:47:33.825584 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:33.825600 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:33.825611 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:33.825625 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:33.825664 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:33.825684 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:33.825702 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:33.825732 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:33.825768 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:41.055896 | orchestrator |
2026-04-06 02:47:41.055999 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-04-06 02:47:41.056011 | orchestrator | Monday 06 April 2026 02:47:33 +0000 (0:00:01.455) 0:01:50.665 **********
2026-04-06 02:47:41.056021 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:41.056031 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:41.056038 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:41.056046 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:41.056118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:41.056127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:41.056134 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:41.056141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:41.056162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:41.056169 | orchestrator |
2026-04-06 02:47:41.056176 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-04-06 02:47:41.056183 | orchestrator | Monday 06 April 2026 02:47:37 +0000 (0:00:03.891) 0:01:54.557 **********
2026-04-06 02:47:41.056206 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:41.056213 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:41.056220 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:41.056227 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:41.056241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:41.056249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:41.056255 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:41.056261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:41.056271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 02:47:41.056278 | orchestrator |
2026-04-06 02:47:41.056284 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-06 02:47:41.056290 | orchestrator | Monday 06 April 2026 02:47:40 +0000 (0:00:03.124) 0:01:57.682 **********
2026-04-06 02:47:41.056296 | orchestrator |
2026-04-06 02:47:41.056303 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-06 02:47:41.056309 | orchestrator | Monday 06 April 2026 02:47:40 +0000 (0:00:00.075) 0:01:57.757 **********
2026-04-06 02:47:41.056315 | orchestrator |
2026-04-06 02:47:41.056321 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-06 02:47:41.056327 | orchestrator | Monday 06 April 2026 02:47:40 +0000 (0:00:00.073) 0:01:57.830 **********
2026-04-06 02:47:41.056333 | orchestrator |
2026-04-06 02:47:41.056345 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-04-06 02:48:05.484909 | orchestrator | Monday 06 April 2026 02:47:41 +0000 (0:00:00.068) 0:01:57.899 **********
2026-04-06 02:48:05.485012 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:48:05.485025 | orchestrator | changed:
[testbed-node-2] 2026-04-06 02:48:05.485034 | orchestrator | 2026-04-06 02:48:05.485044 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-06 02:48:05.485052 | orchestrator | Monday 06 April 2026 02:47:47 +0000 (0:00:06.236) 0:02:04.135 ********** 2026-04-06 02:48:05.485060 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:48:05.485069 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:48:05.485077 | orchestrator | 2026-04-06 02:48:05.485166 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-06 02:48:05.485177 | orchestrator | Monday 06 April 2026 02:47:53 +0000 (0:00:06.254) 0:02:10.390 ********** 2026-04-06 02:48:05.485185 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:48:05.485193 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:48:05.485201 | orchestrator | 2026-04-06 02:48:05.485209 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-06 02:48:05.485217 | orchestrator | Monday 06 April 2026 02:47:59 +0000 (0:00:06.182) 0:02:16.573 ********** 2026-04-06 02:48:05.485226 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:48:05.485234 | orchestrator | 2026-04-06 02:48:05.485242 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-06 02:48:05.485250 | orchestrator | Monday 06 April 2026 02:47:59 +0000 (0:00:00.150) 0:02:16.723 ********** 2026-04-06 02:48:05.485258 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:48:05.485267 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:48:05.485275 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:48:05.485283 | orchestrator | 2026-04-06 02:48:05.485291 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-06 02:48:05.485299 | orchestrator | Monday 06 April 2026 02:48:00 +0000 (0:00:01.082) 0:02:17.806 ********** 
2026-04-06 02:48:05.485307 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:48:05.485315 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:48:05.485323 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:48:05.485331 | orchestrator | 2026-04-06 02:48:05.485340 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-06 02:48:05.485348 | orchestrator | Monday 06 April 2026 02:48:01 +0000 (0:00:00.655) 0:02:18.461 ********** 2026-04-06 02:48:05.485356 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:48:05.485364 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:48:05.485373 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:48:05.485381 | orchestrator | 2026-04-06 02:48:05.485389 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-06 02:48:05.485397 | orchestrator | Monday 06 April 2026 02:48:02 +0000 (0:00:00.857) 0:02:19.319 ********** 2026-04-06 02:48:05.485405 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:48:05.485413 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:48:05.485421 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:48:05.485429 | orchestrator | 2026-04-06 02:48:05.485437 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-06 02:48:05.485447 | orchestrator | Monday 06 April 2026 02:48:03 +0000 (0:00:00.621) 0:02:19.941 ********** 2026-04-06 02:48:05.485456 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:48:05.485466 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:48:05.485476 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:48:05.485485 | orchestrator | 2026-04-06 02:48:05.485495 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-06 02:48:05.485504 | orchestrator | Monday 06 April 2026 02:48:04 +0000 (0:00:01.045) 0:02:20.987 ********** 2026-04-06 02:48:05.485513 | orchestrator 
| ok: [testbed-node-0] 2026-04-06 02:48:05.485523 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:48:05.485533 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:48:05.485542 | orchestrator | 2026-04-06 02:48:05.485551 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 02:48:05.485562 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-06 02:48:05.485573 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-06 02:48:05.485582 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-06 02:48:05.485592 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 02:48:05.485608 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 02:48:05.485618 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 02:48:05.485627 | orchestrator | 2026-04-06 02:48:05.485637 | orchestrator | 2026-04-06 02:48:05.485672 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 02:48:05.485682 | orchestrator | Monday 06 April 2026 02:48:05 +0000 (0:00:00.902) 0:02:21.890 ********** 2026-04-06 02:48:05.485692 | orchestrator | =============================================================================== 2026-04-06 02:48:05.485701 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 38.58s 2026-04-06 02:48:05.485711 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 17.05s 2026-04-06 02:48:05.485720 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.75s 2026-04-06 02:48:05.485729 | orchestrator | ovn-db 
: Restart ovn-sb-db container ------------------------------------ 8.80s 2026-04-06 02:48:05.485739 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.68s 2026-04-06 02:48:05.485763 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.89s 2026-04-06 02:48:05.485773 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.86s 2026-04-06 02:48:05.485783 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.12s 2026-04-06 02:48:05.485795 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.46s 2026-04-06 02:48:05.485808 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 1.99s 2026-04-06 02:48:05.485826 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.67s 2026-04-06 02:48:05.485847 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.63s 2026-04-06 02:48:05.485859 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.51s 2026-04-06 02:48:05.485873 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.48s 2026-04-06 02:48:05.485888 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.46s 2026-04-06 02:48:05.485901 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.37s 2026-04-06 02:48:05.485914 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.27s 2026-04-06 02:48:05.485928 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.24s 2026-04-06 02:48:05.485942 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.19s 2026-04-06 02:48:05.485957 | orchestrator | ovn-db : Get 
OVN_Northbound cluster leader ------------------------------ 1.08s 2026-04-06 02:48:05.840486 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-06 02:48:05.840607 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh 2026-04-06 02:48:08.234310 | orchestrator | 2026-04-06 02:48:08 | INFO  | Trying to run play wipe-partitions in environment custom 2026-04-06 02:48:18.505673 | orchestrator | 2026-04-06 02:48:18 | INFO  | Task edf69314-7f21-4d94-bf07-2d9b9db5202f (wipe-partitions) was prepared for execution. 2026-04-06 02:48:18.505752 | orchestrator | 2026-04-06 02:48:18 | INFO  | It takes a moment until task edf69314-7f21-4d94-bf07-2d9b9db5202f (wipe-partitions) has been started and output is visible here. 2026-04-06 02:48:32.185842 | orchestrator | 2026-04-06 02:48:32.185968 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-04-06 02:48:32.185989 | orchestrator | 2026-04-06 02:48:32.186003 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-04-06 02:48:32.186077 | orchestrator | Monday 06 April 2026 02:48:23 +0000 (0:00:00.146) 0:00:00.146 ********** 2026-04-06 02:48:32.186167 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:48:32.186183 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:48:32.186195 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:48:32.186206 | orchestrator | 2026-04-06 02:48:32.186219 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-04-06 02:48:32.186231 | orchestrator | Monday 06 April 2026 02:48:23 +0000 (0:00:00.753) 0:00:00.899 ********** 2026-04-06 02:48:32.186243 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:48:32.186255 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:48:32.186267 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:48:32.186279 | orchestrator | 2026-04-06 
02:48:32.186292 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-04-06 02:48:32.186304 | orchestrator | Monday 06 April 2026 02:48:24 +0000 (0:00:00.418) 0:00:01.318 ********** 2026-04-06 02:48:32.186317 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:48:32.186330 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:48:32.186342 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:48:32.186354 | orchestrator | 2026-04-06 02:48:32.186366 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-04-06 02:48:32.186380 | orchestrator | Monday 06 April 2026 02:48:25 +0000 (0:00:00.611) 0:00:01.930 ********** 2026-04-06 02:48:32.186396 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:48:32.186412 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:48:32.186431 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:48:32.186446 | orchestrator | 2026-04-06 02:48:32.186460 | orchestrator | TASK [Check device availability] *********************************************** 2026-04-06 02:48:32.186474 | orchestrator | Monday 06 April 2026 02:48:25 +0000 (0:00:00.311) 0:00:02.241 ********** 2026-04-06 02:48:32.186488 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-06 02:48:32.186504 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-06 02:48:32.186518 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-06 02:48:32.186533 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-06 02:48:32.186548 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-06 02:48:32.186563 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-06 02:48:32.186594 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-06 02:48:32.186608 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-06 02:48:32.186620 | orchestrator | changed: [testbed-node-5] => 
(item=/dev/sdd) 2026-04-06 02:48:32.186632 | orchestrator | 2026-04-06 02:48:32.186648 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-04-06 02:48:32.186663 | orchestrator | Monday 06 April 2026 02:48:26 +0000 (0:00:01.270) 0:00:03.512 ********** 2026-04-06 02:48:32.186679 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-04-06 02:48:32.186691 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-04-06 02:48:32.186704 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-04-06 02:48:32.186717 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-04-06 02:48:32.186730 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-04-06 02:48:32.186743 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-04-06 02:48:32.186757 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-04-06 02:48:32.186770 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-04-06 02:48:32.186783 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-04-06 02:48:32.186796 | orchestrator | 2026-04-06 02:48:32.186808 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-04-06 02:48:32.186822 | orchestrator | Monday 06 April 2026 02:48:28 +0000 (0:00:01.657) 0:00:05.169 ********** 2026-04-06 02:48:32.186835 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-06 02:48:32.186848 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-06 02:48:32.186861 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-06 02:48:32.186873 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-06 02:48:32.186899 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-06 02:48:32.186912 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-06 02:48:32.186925 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-06 02:48:32.186939 | 
orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-06 02:48:32.186952 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-06 02:48:32.186965 | orchestrator | 2026-04-06 02:48:32.186977 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-04-06 02:48:32.186990 | orchestrator | Monday 06 April 2026 02:48:30 +0000 (0:00:02.148) 0:00:07.317 ********** 2026-04-06 02:48:32.187004 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:48:32.187017 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:48:32.187029 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:48:32.187041 | orchestrator | 2026-04-06 02:48:32.187054 | orchestrator | TASK [Request device events from the kernel] *********************************** 2026-04-06 02:48:32.187066 | orchestrator | Monday 06 April 2026 02:48:31 +0000 (0:00:00.626) 0:00:07.944 ********** 2026-04-06 02:48:32.187079 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:48:32.187092 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:48:32.187105 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:48:32.187136 | orchestrator | 2026-04-06 02:48:32.187149 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 02:48:32.187164 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-06 02:48:32.187179 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-06 02:48:32.187214 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-06 02:48:32.187228 | orchestrator | 2026-04-06 02:48:32.187242 | orchestrator | 2026-04-06 02:48:32.187255 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 02:48:32.187269 | orchestrator | Monday 06 April 2026 02:48:31 +0000 
(0:00:00.699) 0:00:08.644 ********** 2026-04-06 02:48:32.187282 | orchestrator | =============================================================================== 2026-04-06 02:48:32.187296 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.15s 2026-04-06 02:48:32.187308 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.66s 2026-04-06 02:48:32.187322 | orchestrator | Check device availability ----------------------------------------------- 1.27s 2026-04-06 02:48:32.187335 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.75s 2026-04-06 02:48:32.187348 | orchestrator | Request device events from the kernel ----------------------------------- 0.70s 2026-04-06 02:48:32.187360 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s 2026-04-06 02:48:32.187372 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.61s 2026-04-06 02:48:32.187383 | orchestrator | Remove all rook related logical devices --------------------------------- 0.42s 2026-04-06 02:48:32.187395 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.31s 2026-04-06 02:48:45.029551 | orchestrator | 2026-04-06 02:48:45 | INFO  | Task bd342569-67d1-4a4c-9cc9-9c17d609df0d (facts) was prepared for execution. 2026-04-06 02:48:45.029655 | orchestrator | 2026-04-06 02:48:45 | INFO  | It takes a moment until task bd342569-67d1-4a4c-9cc9-9c17d609df0d (facts) has been started and output is visible here. 
2026-04-06 02:48:58.666633 | orchestrator |
2026-04-06 02:48:58.666730 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-06 02:48:58.666741 | orchestrator |
2026-04-06 02:48:58.666748 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-06 02:48:58.666778 | orchestrator | Monday 06 April 2026 02:48:49 +0000 (0:00:00.289) 0:00:00.289 **********
2026-04-06 02:48:58.666784 | orchestrator | ok: [testbed-manager]
2026-04-06 02:48:58.666792 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:48:58.666798 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:48:58.666804 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:48:58.666809 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:48:58.666815 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:48:58.666824 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:48:58.666833 | orchestrator |
2026-04-06 02:48:58.666844 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-06 02:48:58.666860 | orchestrator | Monday 06 April 2026 02:48:50 +0000 (0:00:01.172) 0:00:01.461 **********
2026-04-06 02:48:58.666869 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:48:58.666879 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:48:58.666889 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:48:58.666898 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:48:58.666907 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:48:58.666915 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:48:58.666924 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:48:58.666934 | orchestrator |
2026-04-06 02:48:58.666944 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-06 02:48:58.666953 | orchestrator |
2026-04-06 02:48:58.666963 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-06 02:48:58.666972 | orchestrator | Monday 06 April 2026 02:48:52 +0000 (0:00:01.363) 0:00:02.824 **********
2026-04-06 02:48:58.666982 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:48:58.666991 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:48:58.667000 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:48:58.667008 | orchestrator | ok: [testbed-manager]
2026-04-06 02:48:58.667014 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:48:58.667020 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:48:58.667025 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:48:58.667031 | orchestrator |
2026-04-06 02:48:58.667037 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-06 02:48:58.667043 | orchestrator |
2026-04-06 02:48:58.667049 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-06 02:48:58.667055 | orchestrator | Monday 06 April 2026 02:48:57 +0000 (0:00:05.334) 0:00:08.158 **********
2026-04-06 02:48:58.667061 | orchestrator | skipping: [testbed-manager]
2026-04-06 02:48:58.667067 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:48:58.667073 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:48:58.667079 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:48:58.667084 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:48:58.667092 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:48:58.667101 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:48:58.667116 | orchestrator |
2026-04-06 02:48:58.667127 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 02:48:58.667161 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:48:58.667256 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:48:58.667280 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:48:58.667290 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:48:58.667299 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:48:58.667306 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:48:58.667321 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 02:48:58.667327 | orchestrator |
2026-04-06 02:48:58.667333 | orchestrator |
2026-04-06 02:48:58.667339 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 02:48:58.667345 | orchestrator | Monday 06 April 2026 02:48:58 +0000 (0:00:00.613) 0:00:08.772 **********
2026-04-06 02:48:58.667351 | orchestrator | ===============================================================================
2026-04-06 02:48:58.667357 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.33s
2026-04-06 02:48:58.667363 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.36s
2026-04-06 02:48:58.667369 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.17s
2026-04-06 02:48:58.667375 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.61s
2026-04-06 02:49:01.358416 | orchestrator | 2026-04-06 02:49:01 | INFO  | Task 7944546c-1291-4dcb-8838-2154b855e08e (ceph-configure-lvm-volumes) was prepared for execution.
2026-04-06 02:49:01.358500 | orchestrator | 2026-04-06 02:49:01 | INFO  | It takes a moment until task 7944546c-1291-4dcb-8838-2154b855e08e (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-04-06 02:49:14.761678 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-06 02:49:14.761794 | orchestrator | 2.16.14
2026-04-06 02:49:14.761812 | orchestrator |
2026-04-06 02:49:14.761824 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-06 02:49:14.761835 | orchestrator |
2026-04-06 02:49:14.761845 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-06 02:49:14.761856 | orchestrator | Monday 06 April 2026 02:49:06 +0000 (0:00:00.419) 0:00:00.419 **********
2026-04-06 02:49:14.761867 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-06 02:49:14.761877 | orchestrator |
2026-04-06 02:49:14.761905 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-06 02:49:14.761915 | orchestrator | Monday 06 April 2026 02:49:06 +0000 (0:00:00.295) 0:00:00.715 **********
2026-04-06 02:49:14.761925 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:49:14.761934 | orchestrator |
2026-04-06 02:49:14.761944 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:14.761953 | orchestrator | Monday 06 April 2026 02:49:06 +0000 (0:00:00.276) 0:00:00.991 **********
2026-04-06 02:49:14.761963 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-06 02:49:14.761973 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-06 02:49:14.761983 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-06 02:49:14.761994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-06 02:49:14.762004 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-06 02:49:14.762014 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-06 02:49:14.762093 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-06 02:49:14.762104 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-06 02:49:14.762123 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-06 02:49:14.762135 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-06 02:49:14.762142 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-06 02:49:14.762207 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-06 02:49:14.762233 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-06 02:49:14.762242 | orchestrator |
2026-04-06 02:49:14.762253 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:14.762263 | orchestrator | Monday 06 April 2026 02:49:07 +0000 (0:00:00.552) 0:00:01.544 **********
2026-04-06 02:49:14.762274 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:49:14.762286 | orchestrator |
2026-04-06 02:49:14.762297 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:14.762309 | orchestrator | Monday 06 April 2026 02:49:07 +0000 (0:00:00.216) 0:00:01.760 **********
2026-04-06 02:49:14.762320 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:49:14.762329 | orchestrator |
2026-04-06 02:49:14.762337 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:14.762344 | orchestrator | Monday 06 April 2026 02:49:07 +0000 (0:00:00.214) 0:00:01.974 **********
2026-04-06 02:49:14.762351 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:49:14.762358 | orchestrator |
2026-04-06 02:49:14.762366 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:14.762373 | orchestrator | Monday 06 April 2026 02:49:08 +0000 (0:00:00.286) 0:00:02.261 **********
2026-04-06 02:49:14.762383 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:49:14.762393 | orchestrator |
2026-04-06 02:49:14.762402 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:14.762412 | orchestrator | Monday 06 April 2026 02:49:08 +0000 (0:00:00.212) 0:00:02.474 **********
2026-04-06 02:49:14.762422 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:49:14.762432 | orchestrator |
2026-04-06 02:49:14.762443 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:14.762455 | orchestrator | Monday 06 April 2026 02:49:08 +0000 (0:00:00.222) 0:00:02.696 **********
2026-04-06 02:49:14.762466 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:49:14.762476 | orchestrator |
2026-04-06 02:49:14.762484 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:14.762492 | orchestrator | Monday 06 April 2026 02:49:08 +0000 (0:00:00.228) 0:00:02.925 **********
2026-04-06 02:49:14.762499 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:49:14.762507 | orchestrator |
2026-04-06 02:49:14.762514 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:14.762522 | orchestrator | Monday 06 April 2026 02:49:09 +0000 (0:00:00.222) 0:00:03.148 **********
2026-04-06 02:49:14.762529 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:49:14.762536 | orchestrator |
2026-04-06 02:49:14.762544 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:14.762556 | orchestrator | Monday 06 April 2026 02:49:09 +0000 (0:00:00.216) 0:00:03.364 **********
2026-04-06 02:49:14.762567 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa)
2026-04-06 02:49:14.762579 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa)
2026-04-06 02:49:14.762589 | orchestrator |
2026-04-06 02:49:14.762599 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:14.762630 | orchestrator | Monday 06 April 2026 02:49:09 +0000 (0:00:00.434) 0:00:03.799 **********
2026-04-06 02:49:14.762642 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527)
2026-04-06 02:49:14.762653 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527)
2026-04-06 02:49:14.762663 | orchestrator |
2026-04-06 02:49:14.762674 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:14.762683 | orchestrator | Monday 06 April 2026 02:49:10 +0000 (0:00:00.732) 0:00:04.531 **********
2026-04-06 02:49:14.762703 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c)
2026-04-06 02:49:14.762726 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c)
2026-04-06 02:49:14.762736 | orchestrator |
2026-04-06 02:49:14.762747 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:14.762758 | orchestrator | Monday 06 April 2026 02:49:11 +0000
(0:00:00.740) 0:00:05.271 ********** 2026-04-06 02:49:14.762768 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103) 2026-04-06 02:49:14.762777 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103) 2026-04-06 02:49:14.762784 | orchestrator | 2026-04-06 02:49:14.762790 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:49:14.762796 | orchestrator | Monday 06 April 2026 02:49:12 +0000 (0:00:00.992) 0:00:06.264 ********** 2026-04-06 02:49:14.762803 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-06 02:49:14.762809 | orchestrator | 2026-04-06 02:49:14.762815 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:49:14.762822 | orchestrator | Monday 06 April 2026 02:49:12 +0000 (0:00:00.401) 0:00:06.666 ********** 2026-04-06 02:49:14.762828 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-04-06 02:49:14.762834 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-04-06 02:49:14.762840 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-04-06 02:49:14.762847 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-04-06 02:49:14.762853 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-04-06 02:49:14.762859 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-04-06 02:49:14.762865 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-04-06 02:49:14.762871 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for 
testbed-node-3 => (item=loop7) 2026-04-06 02:49:14.762877 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-04-06 02:49:14.762884 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-04-06 02:49:14.762890 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-04-06 02:49:14.762896 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-04-06 02:49:14.762902 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-04-06 02:49:14.762908 | orchestrator | 2026-04-06 02:49:14.762914 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:49:14.762920 | orchestrator | Monday 06 April 2026 02:49:13 +0000 (0:00:00.434) 0:00:07.100 ********** 2026-04-06 02:49:14.762927 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:14.762933 | orchestrator | 2026-04-06 02:49:14.762939 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:49:14.762945 | orchestrator | Monday 06 April 2026 02:49:13 +0000 (0:00:00.237) 0:00:07.338 ********** 2026-04-06 02:49:14.762952 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:14.762958 | orchestrator | 2026-04-06 02:49:14.762964 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:49:14.762970 | orchestrator | Monday 06 April 2026 02:49:13 +0000 (0:00:00.244) 0:00:07.582 ********** 2026-04-06 02:49:14.762977 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:14.762983 | orchestrator | 2026-04-06 02:49:14.762989 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:49:14.762995 | orchestrator | Monday 06 April 2026 02:49:13 +0000 
(0:00:00.230) 0:00:07.813 ********** 2026-04-06 02:49:14.763007 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:14.763014 | orchestrator | 2026-04-06 02:49:14.763020 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:49:14.763026 | orchestrator | Monday 06 April 2026 02:49:14 +0000 (0:00:00.225) 0:00:08.038 ********** 2026-04-06 02:49:14.763032 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:14.763039 | orchestrator | 2026-04-06 02:49:14.763045 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:49:14.763051 | orchestrator | Monday 06 April 2026 02:49:14 +0000 (0:00:00.260) 0:00:08.299 ********** 2026-04-06 02:49:14.763057 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:14.763063 | orchestrator | 2026-04-06 02:49:14.763069 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:49:14.763076 | orchestrator | Monday 06 April 2026 02:49:14 +0000 (0:00:00.222) 0:00:08.521 ********** 2026-04-06 02:49:14.763082 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:14.763088 | orchestrator | 2026-04-06 02:49:14.763101 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:49:23.299310 | orchestrator | Monday 06 April 2026 02:49:14 +0000 (0:00:00.248) 0:00:08.770 ********** 2026-04-06 02:49:23.299426 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:23.299443 | orchestrator | 2026-04-06 02:49:23.299456 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:49:23.299468 | orchestrator | Monday 06 April 2026 02:49:14 +0000 (0:00:00.215) 0:00:08.985 ********** 2026-04-06 02:49:23.299479 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-04-06 02:49:23.299491 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-04-06 
02:49:23.299518 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-04-06 02:49:23.299530 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-04-06 02:49:23.299541 | orchestrator | 2026-04-06 02:49:23.299552 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:49:23.299564 | orchestrator | Monday 06 April 2026 02:49:16 +0000 (0:00:01.191) 0:00:10.177 ********** 2026-04-06 02:49:23.299575 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:23.299586 | orchestrator | 2026-04-06 02:49:23.299597 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:49:23.299609 | orchestrator | Monday 06 April 2026 02:49:16 +0000 (0:00:00.268) 0:00:10.446 ********** 2026-04-06 02:49:23.299620 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:23.299631 | orchestrator | 2026-04-06 02:49:23.299642 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:49:23.299653 | orchestrator | Monday 06 April 2026 02:49:16 +0000 (0:00:00.215) 0:00:10.661 ********** 2026-04-06 02:49:23.299664 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:23.299675 | orchestrator | 2026-04-06 02:49:23.299686 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:49:23.299697 | orchestrator | Monday 06 April 2026 02:49:16 +0000 (0:00:00.234) 0:00:10.895 ********** 2026-04-06 02:49:23.299708 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:23.299719 | orchestrator | 2026-04-06 02:49:23.299730 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-06 02:49:23.299741 | orchestrator | Monday 06 April 2026 02:49:17 +0000 (0:00:00.234) 0:00:11.130 ********** 2026-04-06 02:49:23.299753 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-04-06 02:49:23.299764 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-04-06 02:49:23.299775 | orchestrator | 2026-04-06 02:49:23.299788 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-06 02:49:23.299801 | orchestrator | Monday 06 April 2026 02:49:17 +0000 (0:00:00.214) 0:00:11.344 ********** 2026-04-06 02:49:23.299814 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:23.299826 | orchestrator | 2026-04-06 02:49:23.299839 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-06 02:49:23.299853 | orchestrator | Monday 06 April 2026 02:49:17 +0000 (0:00:00.129) 0:00:11.474 ********** 2026-04-06 02:49:23.299886 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:23.299899 | orchestrator | 2026-04-06 02:49:23.299912 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-06 02:49:23.299925 | orchestrator | Monday 06 April 2026 02:49:17 +0000 (0:00:00.161) 0:00:11.635 ********** 2026-04-06 02:49:23.299937 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:23.299950 | orchestrator | 2026-04-06 02:49:23.299963 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-06 02:49:23.299975 | orchestrator | Monday 06 April 2026 02:49:17 +0000 (0:00:00.155) 0:00:11.791 ********** 2026-04-06 02:49:23.299988 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:49:23.300001 | orchestrator | 2026-04-06 02:49:23.300014 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-06 02:49:23.300026 | orchestrator | Monday 06 April 2026 02:49:17 +0000 (0:00:00.152) 0:00:11.943 ********** 2026-04-06 02:49:23.300039 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '44d7a625-0d29-5597-9a0c-b91ce06f2e33'}}) 2026-04-06 02:49:23.300053 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '33ff4195-b9ae-565c-9501-f62265c8cf2c'}}) 2026-04-06 02:49:23.300066 | orchestrator | 2026-04-06 02:49:23.300079 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-06 02:49:23.300091 | orchestrator | Monday 06 April 2026 02:49:18 +0000 (0:00:00.178) 0:00:12.122 ********** 2026-04-06 02:49:23.300105 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '44d7a625-0d29-5597-9a0c-b91ce06f2e33'}})  2026-04-06 02:49:23.300121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '33ff4195-b9ae-565c-9501-f62265c8cf2c'}})  2026-04-06 02:49:23.300134 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:23.300145 | orchestrator | 2026-04-06 02:49:23.300185 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-06 02:49:23.300198 | orchestrator | Monday 06 April 2026 02:49:18 +0000 (0:00:00.397) 0:00:12.520 ********** 2026-04-06 02:49:23.300209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '44d7a625-0d29-5597-9a0c-b91ce06f2e33'}})  2026-04-06 02:49:23.300221 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '33ff4195-b9ae-565c-9501-f62265c8cf2c'}})  2026-04-06 02:49:23.300232 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:23.300243 | orchestrator | 2026-04-06 02:49:23.300253 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-06 02:49:23.300264 | orchestrator | Monday 06 April 2026 02:49:18 +0000 (0:00:00.169) 0:00:12.689 ********** 2026-04-06 02:49:23.300275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '44d7a625-0d29-5597-9a0c-b91ce06f2e33'}})  2026-04-06 02:49:23.300306 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '33ff4195-b9ae-565c-9501-f62265c8cf2c'}})  2026-04-06 02:49:23.300317 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:23.300328 | orchestrator | 2026-04-06 02:49:23.300340 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-06 02:49:23.300351 | orchestrator | Monday 06 April 2026 02:49:18 +0000 (0:00:00.171) 0:00:12.860 ********** 2026-04-06 02:49:23.300362 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:49:23.300373 | orchestrator | 2026-04-06 02:49:23.300384 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-06 02:49:23.300403 | orchestrator | Monday 06 April 2026 02:49:19 +0000 (0:00:00.160) 0:00:13.021 ********** 2026-04-06 02:49:23.300414 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:49:23.300425 | orchestrator | 2026-04-06 02:49:23.300436 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-06 02:49:23.300447 | orchestrator | Monday 06 April 2026 02:49:19 +0000 (0:00:00.162) 0:00:13.184 ********** 2026-04-06 02:49:23.300466 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:23.300478 | orchestrator | 2026-04-06 02:49:23.300489 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-06 02:49:23.300500 | orchestrator | Monday 06 April 2026 02:49:19 +0000 (0:00:00.158) 0:00:13.342 ********** 2026-04-06 02:49:23.300511 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:23.300522 | orchestrator | 2026-04-06 02:49:23.300533 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-06 02:49:23.300543 | orchestrator | Monday 06 April 2026 02:49:19 +0000 (0:00:00.136) 0:00:13.479 ********** 2026-04-06 02:49:23.300555 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:23.300565 | orchestrator | 2026-04-06 
02:49:23.300576 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-06 02:49:23.300587 | orchestrator | Monday 06 April 2026 02:49:19 +0000 (0:00:00.152) 0:00:13.632 ********** 2026-04-06 02:49:23.300598 | orchestrator | ok: [testbed-node-3] => { 2026-04-06 02:49:23.300609 | orchestrator |  "ceph_osd_devices": { 2026-04-06 02:49:23.300620 | orchestrator |  "sdb": { 2026-04-06 02:49:23.300631 | orchestrator |  "osd_lvm_uuid": "44d7a625-0d29-5597-9a0c-b91ce06f2e33" 2026-04-06 02:49:23.300642 | orchestrator |  }, 2026-04-06 02:49:23.300653 | orchestrator |  "sdc": { 2026-04-06 02:49:23.300664 | orchestrator |  "osd_lvm_uuid": "33ff4195-b9ae-565c-9501-f62265c8cf2c" 2026-04-06 02:49:23.300675 | orchestrator |  } 2026-04-06 02:49:23.300686 | orchestrator |  } 2026-04-06 02:49:23.300697 | orchestrator | } 2026-04-06 02:49:23.300708 | orchestrator | 2026-04-06 02:49:23.300719 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-06 02:49:23.300730 | orchestrator | Monday 06 April 2026 02:49:19 +0000 (0:00:00.172) 0:00:13.804 ********** 2026-04-06 02:49:23.300741 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:23.300752 | orchestrator | 2026-04-06 02:49:23.300762 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-06 02:49:23.300773 | orchestrator | Monday 06 April 2026 02:49:19 +0000 (0:00:00.147) 0:00:13.952 ********** 2026-04-06 02:49:23.300789 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:23.300808 | orchestrator | 2026-04-06 02:49:23.300826 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-06 02:49:23.300851 | orchestrator | Monday 06 April 2026 02:49:20 +0000 (0:00:00.168) 0:00:14.121 ********** 2026-04-06 02:49:23.300870 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:49:23.300888 | orchestrator | 2026-04-06 
02:49:23.300906 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-06 02:49:23.300923 | orchestrator | Monday 06 April 2026 02:49:20 +0000 (0:00:00.155) 0:00:14.277 ********** 2026-04-06 02:49:23.300939 | orchestrator | changed: [testbed-node-3] => { 2026-04-06 02:49:23.300954 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-06 02:49:23.300969 | orchestrator |  "ceph_osd_devices": { 2026-04-06 02:49:23.300988 | orchestrator |  "sdb": { 2026-04-06 02:49:23.301005 | orchestrator |  "osd_lvm_uuid": "44d7a625-0d29-5597-9a0c-b91ce06f2e33" 2026-04-06 02:49:23.301024 | orchestrator |  }, 2026-04-06 02:49:23.301042 | orchestrator |  "sdc": { 2026-04-06 02:49:23.301061 | orchestrator |  "osd_lvm_uuid": "33ff4195-b9ae-565c-9501-f62265c8cf2c" 2026-04-06 02:49:23.301079 | orchestrator |  } 2026-04-06 02:49:23.301097 | orchestrator |  }, 2026-04-06 02:49:23.301109 | orchestrator |  "lvm_volumes": [ 2026-04-06 02:49:23.301120 | orchestrator |  { 2026-04-06 02:49:23.301131 | orchestrator |  "data": "osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33", 2026-04-06 02:49:23.301142 | orchestrator |  "data_vg": "ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33" 2026-04-06 02:49:23.301153 | orchestrator |  }, 2026-04-06 02:49:23.301223 | orchestrator |  { 2026-04-06 02:49:23.301233 | orchestrator |  "data": "osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c", 2026-04-06 02:49:23.301256 | orchestrator |  "data_vg": "ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c" 2026-04-06 02:49:23.301267 | orchestrator |  } 2026-04-06 02:49:23.301278 | orchestrator |  ] 2026-04-06 02:49:23.301288 | orchestrator |  } 2026-04-06 02:49:23.301299 | orchestrator | } 2026-04-06 02:49:23.301310 | orchestrator | 2026-04-06 02:49:23.301321 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-06 02:49:23.301332 | orchestrator | Monday 06 April 2026 02:49:20 +0000 (0:00:00.480) 0:00:14.757 ********** 2026-04-06 
02:49:23.301342 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-06 02:49:23.301353 | orchestrator | 2026-04-06 02:49:23.301364 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-06 02:49:23.301375 | orchestrator | 2026-04-06 02:49:23.301385 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-06 02:49:23.301396 | orchestrator | Monday 06 April 2026 02:49:22 +0000 (0:00:02.006) 0:00:16.764 ********** 2026-04-06 02:49:23.301406 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-06 02:49:23.301417 | orchestrator | 2026-04-06 02:49:23.301428 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-06 02:49:23.301438 | orchestrator | Monday 06 April 2026 02:49:23 +0000 (0:00:00.282) 0:00:17.046 ********** 2026-04-06 02:49:23.301449 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:49:23.301460 | orchestrator | 2026-04-06 02:49:23.301482 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:49:32.326669 | orchestrator | Monday 06 April 2026 02:49:23 +0000 (0:00:00.263) 0:00:17.310 ********** 2026-04-06 02:49:32.326754 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-06 02:49:32.326763 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-06 02:49:32.326769 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-06 02:49:32.326787 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-06 02:49:32.326792 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-06 02:49:32.326798 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-06 02:49:32.326803 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-06 02:49:32.326808 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-06 02:49:32.326814 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-06 02:49:32.326819 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-06 02:49:32.326824 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-06 02:49:32.326829 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-06 02:49:32.326835 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-06 02:49:32.326840 | orchestrator | 2026-04-06 02:49:32.326846 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:49:32.326851 | orchestrator | Monday 06 April 2026 02:49:23 +0000 (0:00:00.412) 0:00:17.723 ********** 2026-04-06 02:49:32.326857 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:49:32.326863 | orchestrator | 2026-04-06 02:49:32.326868 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:49:32.326873 | orchestrator | Monday 06 April 2026 02:49:23 +0000 (0:00:00.256) 0:00:17.979 ********** 2026-04-06 02:49:32.326878 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:49:32.326884 | orchestrator | 2026-04-06 02:49:32.326889 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:49:32.326894 | orchestrator | Monday 06 April 2026 02:49:24 +0000 (0:00:00.226) 0:00:18.206 ********** 2026-04-06 02:49:32.326913 | orchestrator | skipping: 
[testbed-node-4] 2026-04-06 02:49:32.326918 | orchestrator | 2026-04-06 02:49:32.326923 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:49:32.326929 | orchestrator | Monday 06 April 2026 02:49:24 +0000 (0:00:00.231) 0:00:18.437 ********** 2026-04-06 02:49:32.326934 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:49:32.326939 | orchestrator | 2026-04-06 02:49:32.326944 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:49:32.326949 | orchestrator | Monday 06 April 2026 02:49:25 +0000 (0:00:00.737) 0:00:19.175 ********** 2026-04-06 02:49:32.326955 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:49:32.326960 | orchestrator | 2026-04-06 02:49:32.326965 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:49:32.326970 | orchestrator | Monday 06 April 2026 02:49:25 +0000 (0:00:00.247) 0:00:19.423 ********** 2026-04-06 02:49:32.326975 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:49:32.326981 | orchestrator | 2026-04-06 02:49:32.326986 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:49:32.326991 | orchestrator | Monday 06 April 2026 02:49:25 +0000 (0:00:00.232) 0:00:19.655 ********** 2026-04-06 02:49:32.326996 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:49:32.327001 | orchestrator | 2026-04-06 02:49:32.327006 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:49:32.327011 | orchestrator | Monday 06 April 2026 02:49:25 +0000 (0:00:00.214) 0:00:19.870 ********** 2026-04-06 02:49:32.327017 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:49:32.327022 | orchestrator | 2026-04-06 02:49:32.327027 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:49:32.327032 | 
orchestrator | Monday 06 April 2026 02:49:26 +0000 (0:00:00.223) 0:00:20.094 ********** 2026-04-06 02:49:32.327037 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336) 2026-04-06 02:49:32.327044 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336) 2026-04-06 02:49:32.327049 | orchestrator | 2026-04-06 02:49:32.327054 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:49:32.327059 | orchestrator | Monday 06 April 2026 02:49:26 +0000 (0:00:00.480) 0:00:20.574 ********** 2026-04-06 02:49:32.327065 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554) 2026-04-06 02:49:32.327070 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554) 2026-04-06 02:49:32.327075 | orchestrator | 2026-04-06 02:49:32.327080 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:49:32.327085 | orchestrator | Monday 06 April 2026 02:49:27 +0000 (0:00:00.466) 0:00:21.040 ********** 2026-04-06 02:49:32.327090 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867) 2026-04-06 02:49:32.327096 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867) 2026-04-06 02:49:32.327101 | orchestrator | 2026-04-06 02:49:32.327106 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:49:32.327122 | orchestrator | Monday 06 April 2026 02:49:27 +0000 (0:00:00.486) 0:00:21.527 ********** 2026-04-06 02:49:32.327128 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de) 2026-04-06 02:49:32.327133 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de) 2026-04-06 02:49:32.327138 | orchestrator | 2026-04-06 02:49:32.327144 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:49:32.327153 | orchestrator | Monday 06 April 2026 02:49:27 +0000 (0:00:00.491) 0:00:22.018 ********** 2026-04-06 02:49:32.327158 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-06 02:49:32.327228 | orchestrator | 2026-04-06 02:49:32.327235 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:49:32.327241 | orchestrator | Monday 06 April 2026 02:49:28 +0000 (0:00:00.372) 0:00:22.390 ********** 2026-04-06 02:49:32.327247 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-06 02:49:32.327254 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-06 02:49:32.327259 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-06 02:49:32.327266 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-06 02:49:32.327271 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-06 02:49:32.327277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-06 02:49:32.327283 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-06 02:49:32.327289 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-06 02:49:32.327295 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-06 02:49:32.327300 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-04-06 02:49:32.327307 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-04-06 02:49:32.327312 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-04-06 02:49:32.327318 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-04-06 02:49:32.327325 | orchestrator |
2026-04-06 02:49:32.327330 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:32.327337 | orchestrator | Monday 06 April 2026 02:49:28 +0000 (0:00:00.420) 0:00:22.810 **********
2026-04-06 02:49:32.327342 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:32.327348 | orchestrator |
2026-04-06 02:49:32.327354 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:32.327360 | orchestrator | Monday 06 April 2026 02:49:29 +0000 (0:00:00.748) 0:00:23.558 **********
2026-04-06 02:49:32.327366 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:32.327372 | orchestrator |
2026-04-06 02:49:32.327378 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:32.327384 | orchestrator | Monday 06 April 2026 02:49:29 +0000 (0:00:00.219) 0:00:23.777 **********
2026-04-06 02:49:32.327390 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:32.327396 | orchestrator |
2026-04-06 02:49:32.327402 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:32.327407 | orchestrator | Monday 06 April 2026 02:49:29 +0000 (0:00:00.211) 0:00:23.989 **********
2026-04-06 02:49:32.327413 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:32.327419 | orchestrator |
2026-04-06 02:49:32.327425 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:32.327431 | orchestrator | Monday 06 April 2026 02:49:30 +0000 (0:00:00.216) 0:00:24.206 **********
2026-04-06 02:49:32.327436 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:32.327442 | orchestrator |
2026-04-06 02:49:32.327448 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:32.327454 | orchestrator | Monday 06 April 2026 02:49:30 +0000 (0:00:00.218) 0:00:24.424 **********
2026-04-06 02:49:32.327460 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:32.327466 | orchestrator |
2026-04-06 02:49:32.327472 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:32.327478 | orchestrator | Monday 06 April 2026 02:49:30 +0000 (0:00:00.255) 0:00:24.680 **********
2026-04-06 02:49:32.327489 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:32.327495 | orchestrator |
2026-04-06 02:49:32.327501 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:32.327506 | orchestrator | Monday 06 April 2026 02:49:30 +0000 (0:00:00.224) 0:00:24.905 **********
2026-04-06 02:49:32.327512 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:32.327517 | orchestrator |
2026-04-06 02:49:32.327522 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:32.327527 | orchestrator | Monday 06 April 2026 02:49:31 +0000 (0:00:00.225) 0:00:25.130 **********
2026-04-06 02:49:32.327532 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-04-06 02:49:32.327538 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-04-06 02:49:32.327544 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-04-06 02:49:32.327549 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-04-06 02:49:32.327554 | orchestrator |
2026-04-06 02:49:32.327559 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:32.327565 | orchestrator | Monday 06 April 2026 02:49:32 +0000 (0:00:00.999) 0:00:26.130 **********
2026-04-06 02:49:32.327570 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:39.730598 | orchestrator |
2026-04-06 02:49:39.730740 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:39.730766 | orchestrator | Monday 06 April 2026 02:49:32 +0000 (0:00:00.210) 0:00:26.340 **********
2026-04-06 02:49:39.730785 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:39.730797 | orchestrator |
2026-04-06 02:49:39.730807 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:39.730818 | orchestrator | Monday 06 April 2026 02:49:32 +0000 (0:00:00.244) 0:00:26.584 **********
2026-04-06 02:49:39.730846 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:39.730861 | orchestrator |
2026-04-06 02:49:39.730876 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:39.730893 | orchestrator | Monday 06 April 2026 02:49:33 +0000 (0:00:00.833) 0:00:27.417 **********
2026-04-06 02:49:39.730910 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:39.730926 | orchestrator |
2026-04-06 02:49:39.730943 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-06 02:49:39.730956 | orchestrator | Monday 06 April 2026 02:49:33 +0000 (0:00:00.211) 0:00:27.629 **********
2026-04-06 02:49:39.730966 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-04-06 02:49:39.730976 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-04-06 02:49:39.730986 | orchestrator |
2026-04-06 02:49:39.730996 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-06 02:49:39.731005 | orchestrator | Monday 06 April 2026 02:49:33 +0000 (0:00:00.242) 0:00:27.871 **********
2026-04-06 02:49:39.731014 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:39.731024 | orchestrator |
2026-04-06 02:49:39.731034 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-06 02:49:39.731043 | orchestrator | Monday 06 April 2026 02:49:34 +0000 (0:00:00.158) 0:00:28.029 **********
2026-04-06 02:49:39.731053 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:39.731062 | orchestrator |
2026-04-06 02:49:39.731072 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-06 02:49:39.731083 | orchestrator | Monday 06 April 2026 02:49:34 +0000 (0:00:00.156) 0:00:28.186 **********
2026-04-06 02:49:39.731095 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:39.731106 | orchestrator |
2026-04-06 02:49:39.731118 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-06 02:49:39.731135 | orchestrator | Monday 06 April 2026 02:49:34 +0000 (0:00:00.152) 0:00:28.339 **********
2026-04-06 02:49:39.731151 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:49:39.731234 | orchestrator |
2026-04-06 02:49:39.731255 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-06 02:49:39.731272 | orchestrator | Monday 06 April 2026 02:49:34 +0000 (0:00:00.152) 0:00:28.491 **********
2026-04-06 02:49:39.731317 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'}})
2026-04-06 02:49:39.731336 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c307d7c-3927-5061-a8a8-155bb148bb1a'}})
2026-04-06 02:49:39.731353 | orchestrator |
2026-04-06 02:49:39.731371 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-06 02:49:39.731389 | orchestrator | Monday 06 April 2026 02:49:34 +0000 (0:00:00.197) 0:00:28.689 **********
2026-04-06 02:49:39.731407 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'}})
2026-04-06 02:49:39.731426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c307d7c-3927-5061-a8a8-155bb148bb1a'}})
2026-04-06 02:49:39.731441 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:39.731453 | orchestrator |
2026-04-06 02:49:39.731463 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-06 02:49:39.731473 | orchestrator | Monday 06 April 2026 02:49:34 +0000 (0:00:00.162) 0:00:28.852 **********
2026-04-06 02:49:39.731483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'}})
2026-04-06 02:49:39.731493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c307d7c-3927-5061-a8a8-155bb148bb1a'}})
2026-04-06 02:49:39.731502 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:39.731512 | orchestrator |
2026-04-06 02:49:39.731521 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-06 02:49:39.731531 | orchestrator | Monday 06 April 2026 02:49:35 +0000 (0:00:00.180) 0:00:29.033 **********
2026-04-06 02:49:39.731540 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'}})
2026-04-06 02:49:39.731550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c307d7c-3927-5061-a8a8-155bb148bb1a'}})
2026-04-06 02:49:39.731560 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:39.731569 | orchestrator |
2026-04-06 02:49:39.731579 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-06 02:49:39.731588 | orchestrator | Monday 06 April 2026 02:49:35 +0000 (0:00:00.186) 0:00:29.219 **********
2026-04-06 02:49:39.731598 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:49:39.731608 | orchestrator |
2026-04-06 02:49:39.731617 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-06 02:49:39.731627 | orchestrator | Monday 06 April 2026 02:49:35 +0000 (0:00:00.158) 0:00:29.377 **********
2026-04-06 02:49:39.731637 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:49:39.731646 | orchestrator |
2026-04-06 02:49:39.731656 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-06 02:49:39.731666 | orchestrator | Monday 06 April 2026 02:49:35 +0000 (0:00:00.154) 0:00:29.531 **********
2026-04-06 02:49:39.731699 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:39.731709 | orchestrator |
2026-04-06 02:49:39.731719 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-06 02:49:39.731729 | orchestrator | Monday 06 April 2026 02:49:35 +0000 (0:00:00.397) 0:00:29.929 **********
2026-04-06 02:49:39.731738 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:39.731748 | orchestrator |
2026-04-06 02:49:39.731769 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-06 02:49:39.731779 | orchestrator | Monday 06 April 2026 02:49:36 +0000 (0:00:00.145) 0:00:30.074 **********
2026-04-06 02:49:39.731800 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:39.731810 | orchestrator |
2026-04-06 02:49:39.731820 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-06 02:49:39.731830 | orchestrator | Monday 06 April 2026 02:49:36 +0000 (0:00:00.167) 0:00:30.215 **********
2026-04-06 02:49:39.731850 | orchestrator | ok: [testbed-node-4] => {
2026-04-06 02:49:39.731860 | orchestrator |  "ceph_osd_devices": {
2026-04-06 02:49:39.731870 | orchestrator |  "sdb": {
2026-04-06 02:49:39.731880 | orchestrator |  "osd_lvm_uuid": "c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3"
2026-04-06 02:49:39.731890 | orchestrator |  },
2026-04-06 02:49:39.731900 | orchestrator |  "sdc": {
2026-04-06 02:49:39.731909 | orchestrator |  "osd_lvm_uuid": "8c307d7c-3927-5061-a8a8-155bb148bb1a"
2026-04-06 02:49:39.731919 | orchestrator |  }
2026-04-06 02:49:39.731929 | orchestrator |  }
2026-04-06 02:49:39.731939 | orchestrator | }
2026-04-06 02:49:39.731949 | orchestrator |
2026-04-06 02:49:39.731958 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-06 02:49:39.731968 | orchestrator | Monday 06 April 2026 02:49:36 +0000 (0:00:00.167) 0:00:30.383 **********
2026-04-06 02:49:39.731978 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:39.731988 | orchestrator |
2026-04-06 02:49:39.731998 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-06 02:49:39.732007 | orchestrator | Monday 06 April 2026 02:49:36 +0000 (0:00:00.170) 0:00:30.554 **********
2026-04-06 02:49:39.732017 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:39.732027 | orchestrator |
2026-04-06 02:49:39.732036 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-06 02:49:39.732046 | orchestrator | Monday 06 April 2026 02:49:36 +0000 (0:00:00.152) 0:00:30.706 **********
2026-04-06 02:49:39.732056 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:49:39.732065 | orchestrator |
2026-04-06 02:49:39.732075 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-06 02:49:39.732085 | orchestrator | Monday 06 April 2026 02:49:36 +0000 (0:00:00.141) 0:00:30.847 **********
2026-04-06 02:49:39.732094 | orchestrator | changed: [testbed-node-4] => {
2026-04-06 02:49:39.732104 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-04-06 02:49:39.732114 | orchestrator |  "ceph_osd_devices": {
2026-04-06 02:49:39.732123 | orchestrator |  "sdb": {
2026-04-06 02:49:39.732133 | orchestrator |  "osd_lvm_uuid": "c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3"
2026-04-06 02:49:39.732143 | orchestrator |  },
2026-04-06 02:49:39.732153 | orchestrator |  "sdc": {
2026-04-06 02:49:39.732163 | orchestrator |  "osd_lvm_uuid": "8c307d7c-3927-5061-a8a8-155bb148bb1a"
2026-04-06 02:49:39.732202 | orchestrator |  }
2026-04-06 02:49:39.732212 | orchestrator |  },
2026-04-06 02:49:39.732222 | orchestrator |  "lvm_volumes": [
2026-04-06 02:49:39.732232 | orchestrator |  {
2026-04-06 02:49:39.732241 | orchestrator |  "data": "osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3",
2026-04-06 02:49:39.732251 | orchestrator |  "data_vg": "ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3"
2026-04-06 02:49:39.732261 | orchestrator |  },
2026-04-06 02:49:39.732270 | orchestrator |  {
2026-04-06 02:49:39.732280 | orchestrator |  "data": "osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a",
2026-04-06 02:49:39.732289 | orchestrator |  "data_vg": "ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a"
2026-04-06 02:49:39.732299 | orchestrator |  }
2026-04-06 02:49:39.732309 | orchestrator |  ]
2026-04-06 02:49:39.732318 | orchestrator |  }
2026-04-06 02:49:39.732335 | orchestrator | }
2026-04-06 02:49:39.732351 | orchestrator |
2026-04-06 02:49:39.732367 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-06 02:49:39.732384 | orchestrator | Monday 06 April 2026 02:49:37 +0000 (0:00:00.231) 0:00:31.079 **********
2026-04-06 02:49:39.732398 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-06 02:49:39.732414 | orchestrator |
2026-04-06 02:49:39.732431 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-06 02:49:39.732449 | orchestrator |
2026-04-06 02:49:39.732464 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-06 02:49:39.732490 | orchestrator | Monday 06 April 2026 02:49:38 +0000 (0:00:01.552) 0:00:32.632 **********
2026-04-06 02:49:39.732507 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-06 02:49:39.732523 | orchestrator |
2026-04-06 02:49:39.732539 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-06 02:49:39.732556 | orchestrator | Monday 06 April 2026 02:49:38 +0000 (0:00:00.351) 0:00:32.984 **********
2026-04-06 02:49:39.732573 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:49:39.732588 | orchestrator |
2026-04-06 02:49:39.732604 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:39.732621 | orchestrator | Monday 06 April 2026 02:49:39 +0000 (0:00:00.284) 0:00:33.268 **********
2026-04-06 02:49:39.732638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-06 02:49:39.732654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-06 02:49:39.732670 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-06 02:49:39.732686 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-06 02:49:39.732701 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-06 02:49:39.732729 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-06 02:49:49.408894 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-06 02:49:49.409002 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-06 02:49:49.409019 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-06 02:49:49.409045 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-06 02:49:49.409057 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-06 02:49:49.409068 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-06 02:49:49.409077 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-06 02:49:49.409088 | orchestrator |
2026-04-06 02:49:49.409101 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:49.409113 | orchestrator | Monday 06 April 2026 02:49:39 +0000 (0:00:00.467) 0:00:33.736 **********
2026-04-06 02:49:49.409125 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:49.409137 | orchestrator |
2026-04-06 02:49:49.409148 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:49.409159 | orchestrator | Monday 06 April 2026 02:49:39 +0000 (0:00:00.228) 0:00:33.965 **********
2026-04-06 02:49:49.409171 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:49.409264 | orchestrator |
2026-04-06 02:49:49.409278 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:49.409289 | orchestrator | Monday 06 April 2026 02:49:40 +0000 (0:00:00.225) 0:00:34.190 **********
2026-04-06 02:49:49.409301 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:49.409314 | orchestrator |
2026-04-06 02:49:49.409327 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:49.409339 | orchestrator | Monday 06 April 2026 02:49:40 +0000 (0:00:00.229) 0:00:34.420 **********
2026-04-06 02:49:49.409351 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:49.409363 | orchestrator |
2026-04-06 02:49:49.409374 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:49.409382 | orchestrator | Monday 06 April 2026 02:49:40 +0000 (0:00:00.218) 0:00:34.638 **********
2026-04-06 02:49:49.409390 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:49.409397 | orchestrator |
2026-04-06 02:49:49.409404 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:49.409411 | orchestrator | Monday 06 April 2026 02:49:40 +0000 (0:00:00.235) 0:00:34.874 **********
2026-04-06 02:49:49.409437 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:49.409445 | orchestrator |
2026-04-06 02:49:49.409453 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:49.409462 | orchestrator | Monday 06 April 2026 02:49:41 +0000 (0:00:00.294) 0:00:35.168 **********
2026-04-06 02:49:49.409470 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:49.409477 | orchestrator |
2026-04-06 02:49:49.409486 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:49.409493 | orchestrator | Monday 06 April 2026 02:49:41 +0000 (0:00:00.790) 0:00:35.958 **********
2026-04-06 02:49:49.409501 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:49.409509 | orchestrator |
2026-04-06 02:49:49.409517 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:49.409524 | orchestrator | Monday 06 April 2026 02:49:42 +0000 (0:00:00.245) 0:00:36.204 **********
2026-04-06 02:49:49.409533 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8)
2026-04-06 02:49:49.409542 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8)
2026-04-06 02:49:49.409549 | orchestrator |
2026-04-06 02:49:49.409556 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:49.409562 | orchestrator | Monday 06 April 2026 02:49:42 +0000 (0:00:00.522) 0:00:36.726 **********
2026-04-06 02:49:49.409569 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d)
2026-04-06 02:49:49.409576 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d)
2026-04-06 02:49:49.409583 | orchestrator |
2026-04-06 02:49:49.409590 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:49.409596 | orchestrator | Monday 06 April 2026 02:49:43 +0000 (0:00:00.555) 0:00:37.282 **********
2026-04-06 02:49:49.409603 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0)
2026-04-06 02:49:49.409609 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0)
2026-04-06 02:49:49.409616 | orchestrator |
2026-04-06 02:49:49.409623 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:49.409630 | orchestrator | Monday 06 April 2026 02:49:43 +0000 (0:00:00.536) 0:00:37.819 **********
2026-04-06 02:49:49.409637 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c)
2026-04-06 02:49:49.409644 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c)
2026-04-06 02:49:49.409651 | orchestrator |
2026-04-06 02:49:49.409657 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:49:49.409664 | orchestrator | Monday 06 April 2026 02:49:44 +0000 (0:00:00.543) 0:00:38.362 **********
2026-04-06 02:49:49.409671 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-06 02:49:49.409677 | orchestrator |
2026-04-06 02:49:49.409684 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:49.409708 | orchestrator | Monday 06 April 2026 02:49:44 +0000 (0:00:00.418) 0:00:38.781 **********
2026-04-06 02:49:49.409715 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-04-06 02:49:49.409722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-04-06 02:49:49.409730 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-04-06 02:49:49.409753 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-04-06 02:49:49.409768 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-04-06 02:49:49.409782 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-04-06 02:49:49.409799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-04-06 02:49:49.409810 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-04-06 02:49:49.409820 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-04-06 02:49:49.409830 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-04-06 02:49:49.409839 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-04-06 02:49:49.409849 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-04-06 02:49:49.409859 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-04-06 02:49:49.409869 | orchestrator |
2026-04-06 02:49:49.409879 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:49.409889 | orchestrator | Monday 06 April 2026 02:49:45 +0000 (0:00:00.479) 0:00:39.260 **********
2026-04-06 02:49:49.409900 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:49.409911 | orchestrator |
2026-04-06 02:49:49.409923 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:49.409934 | orchestrator | Monday 06 April 2026 02:49:45 +0000 (0:00:00.257) 0:00:39.518 **********
2026-04-06 02:49:49.409945 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:49.409955 | orchestrator |
2026-04-06 02:49:49.409966 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:49.409978 | orchestrator | Monday 06 April 2026 02:49:45 +0000 (0:00:00.223) 0:00:39.742 **********
2026-04-06 02:49:49.409985 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:49.409992 | orchestrator |
2026-04-06 02:49:49.409998 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:49.410005 | orchestrator | Monday 06 April 2026 02:49:46 +0000 (0:00:00.774) 0:00:40.516 **********
2026-04-06 02:49:49.410012 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:49.410067 | orchestrator |
2026-04-06 02:49:49.410074 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:49.410081 | orchestrator | Monday 06 April 2026 02:49:46 +0000 (0:00:00.245) 0:00:40.762 **********
2026-04-06 02:49:49.410088 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:49.410095 | orchestrator |
2026-04-06 02:49:49.410102 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:49.410108 | orchestrator | Monday 06 April 2026 02:49:46 +0000 (0:00:00.239) 0:00:41.002 **********
2026-04-06 02:49:49.410115 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:49.410122 | orchestrator |
2026-04-06 02:49:49.410128 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:49.410135 | orchestrator | Monday 06 April 2026 02:49:47 +0000 (0:00:00.251) 0:00:41.254 **********
2026-04-06 02:49:49.410142 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:49.410148 | orchestrator |
2026-04-06 02:49:49.410155 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:49.410162 | orchestrator | Monday 06 April 2026 02:49:47 +0000 (0:00:00.245) 0:00:41.499 **********
2026-04-06 02:49:49.410168 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:49.410198 | orchestrator |
2026-04-06 02:49:49.410205 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:49.410212 | orchestrator | Monday 06 April 2026 02:49:47 +0000 (0:00:00.228) 0:00:41.727 **********
2026-04-06 02:49:49.410219 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-04-06 02:49:49.410225 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-04-06 02:49:49.410233 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-04-06 02:49:49.410239 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-04-06 02:49:49.410246 | orchestrator |
2026-04-06 02:49:49.410260 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:49.410267 | orchestrator | Monday 06 April 2026 02:49:48 +0000 (0:00:00.770) 0:00:42.498 **********
2026-04-06 02:49:49.410274 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:49.410281 | orchestrator |
2026-04-06 02:49:49.410287 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:49.410294 | orchestrator | Monday 06 April 2026 02:49:48 +0000 (0:00:00.243) 0:00:42.742 **********
2026-04-06 02:49:49.410301 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:49.410307 | orchestrator |
2026-04-06 02:49:49.410314 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:49.410321 | orchestrator | Monday 06 April 2026 02:49:48 +0000 (0:00:00.231) 0:00:42.973 **********
2026-04-06 02:49:49.410328 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:49.410334 | orchestrator |
2026-04-06 02:49:49.410341 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:49:49.410348 | orchestrator | Monday 06 April 2026 02:49:49 +0000 (0:00:00.223) 0:00:43.196 **********
2026-04-06 02:49:49.410355 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:49.410361 | orchestrator |
2026-04-06 02:49:49.410375 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-06 02:49:54.190705 | orchestrator | Monday 06 April 2026 02:49:49 +0000 (0:00:00.224) 0:00:43.421 **********
2026-04-06 02:49:54.191493 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-04-06 02:49:54.191529 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-04-06 02:49:54.191536 | orchestrator |
2026-04-06 02:49:54.191544 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-06 02:49:54.191568 | orchestrator | Monday 06 April 2026 02:49:49 +0000 (0:00:00.454) 0:00:43.875 **********
2026-04-06 02:49:54.191576 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:54.191583 | orchestrator |
2026-04-06 02:49:54.191589 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-06 02:49:54.191596 | orchestrator | Monday 06 April 2026 02:49:50 +0000 (0:00:00.162) 0:00:44.038 **********
2026-04-06 02:49:54.191602 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:54.191610 | orchestrator |
2026-04-06 02:49:54.191616 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-06 02:49:54.191622 | orchestrator | Monday 06 April 2026 02:49:50 +0000 (0:00:00.167) 0:00:44.206 **********
2026-04-06 02:49:54.191629 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:54.191634 | orchestrator |
2026-04-06 02:49:54.191640 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-06 02:49:54.191646 | orchestrator | Monday 06 April 2026 02:49:50 +0000 (0:00:00.159) 0:00:44.365 **********
2026-04-06 02:49:54.191652 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:49:54.191658 | orchestrator |
2026-04-06 02:49:54.191664 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-06 02:49:54.191669 | orchestrator | Monday 06 April 2026 02:49:50 +0000 (0:00:00.155) 0:00:44.520 **********
2026-04-06 02:49:54.191676 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fcd584d6-c8ff-5eaf-81cc-26105cfb5447'}})
2026-04-06 02:49:54.191682 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4d79f264-f564-5244-b3d4-1e30cd615742'}})
2026-04-06 02:49:54.191688 | orchestrator |
2026-04-06 02:49:54.191693 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-06 02:49:54.191698 | orchestrator | Monday 06 April 2026 02:49:50 +0000 (0:00:00.182) 0:00:44.703 **********
2026-04-06 02:49:54.191705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fcd584d6-c8ff-5eaf-81cc-26105cfb5447'}})
2026-04-06 02:49:54.191714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4d79f264-f564-5244-b3d4-1e30cd615742'}})
2026-04-06 02:49:54.191736 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:54.191742 | orchestrator |
2026-04-06 02:49:54.191747 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-06 02:49:54.191753 | orchestrator | Monday 06 April 2026 02:49:50 +0000 (0:00:00.174) 0:00:44.878 **********
2026-04-06 02:49:54.191759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fcd584d6-c8ff-5eaf-81cc-26105cfb5447'}})
2026-04-06 02:49:54.191765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4d79f264-f564-5244-b3d4-1e30cd615742'}})
2026-04-06 02:49:54.191771 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:54.191776 | orchestrator |
2026-04-06 02:49:54.191781 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-06 02:49:54.191787 | orchestrator | Monday 06 April 2026 02:49:51 +0000 (0:00:00.190) 0:00:45.068 **********
2026-04-06 02:49:54.191792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fcd584d6-c8ff-5eaf-81cc-26105cfb5447'}})
2026-04-06 02:49:54.191797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4d79f264-f564-5244-b3d4-1e30cd615742'}})
2026-04-06 02:49:54.191803 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:54.191808 | orchestrator |
2026-04-06 02:49:54.191813 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-06 02:49:54.191821 | orchestrator | Monday 06 April 2026 02:49:51 +0000 (0:00:00.170) 0:00:45.238 **********
2026-04-06 02:49:54.191826 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:49:54.191832 | orchestrator |
2026-04-06 02:49:54.191838 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-06 02:49:54.191843 | orchestrator | Monday 06 April 2026 02:49:51 +0000 (0:00:00.168) 0:00:45.407 **********
2026-04-06 02:49:54.191848 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:49:54.191854 | orchestrator |
2026-04-06 02:49:54.191859 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-06 02:49:54.191866 | orchestrator | Monday 06 April 2026 02:49:51 +0000 (0:00:00.154) 0:00:45.562 **********
2026-04-06 02:49:54.191871 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:54.191877 | orchestrator |
2026-04-06 02:49:54.191885 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-06 02:49:54.191891 | orchestrator | Monday 06 April 2026 02:49:51 +0000 (0:00:00.422) 0:00:45.984 **********
2026-04-06 02:49:54.191897 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:54.191903 | orchestrator |
2026-04-06 02:49:54.191909 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-06 02:49:54.191914 | orchestrator | Monday 06 April 2026 02:49:52 +0000 (0:00:00.147) 0:00:46.132 **********
2026-04-06 02:49:54.191920 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:54.191926 | orchestrator |
2026-04-06 02:49:54.191932 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-06 02:49:54.191937 | orchestrator | Monday 06 April 2026 02:49:52 +0000 (0:00:00.135) 0:00:46.267 **********
2026-04-06 02:49:54.191943 | orchestrator | ok: [testbed-node-5] => {
2026-04-06 02:49:54.191948 | orchestrator |  "ceph_osd_devices": {
2026-04-06 02:49:54.191953 | orchestrator |  "sdb": {
2026-04-06 02:49:54.191980 | orchestrator |  "osd_lvm_uuid": "fcd584d6-c8ff-5eaf-81cc-26105cfb5447"
2026-04-06 02:49:54.191986 | orchestrator |  },
2026-04-06 02:49:54.191992 | orchestrator |  "sdc": {
2026-04-06 02:49:54.191998 | orchestrator |  "osd_lvm_uuid": "4d79f264-f564-5244-b3d4-1e30cd615742"
2026-04-06 02:49:54.192004 | orchestrator |  }
2026-04-06 02:49:54.192010 | orchestrator |  }
2026-04-06 02:49:54.192015 | orchestrator | }
2026-04-06 02:49:54.192021 | orchestrator |
2026-04-06 02:49:54.192034 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-06 02:49:54.192040 | orchestrator | Monday 06 April 2026 02:49:52 +0000 (0:00:00.143) 0:00:46.411 **********
2026-04-06 02:49:54.192046 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:54.192060 | orchestrator |
2026-04-06 02:49:54.192066 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-06 02:49:54.192072 | orchestrator | Monday 06 April 2026 02:49:52 +0000 (0:00:00.165) 0:00:46.576 **********
2026-04-06 02:49:54.192078 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:54.192084 | orchestrator |
2026-04-06 02:49:54.192091 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-06 02:49:54.192097 | orchestrator | Monday 06 April 2026 02:49:52 +0000 (0:00:00.155) 0:00:46.732 **********
2026-04-06 02:49:54.192103 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:49:54.192108 | orchestrator |
2026-04-06 02:49:54.192114 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-06 02:49:54.192120 | orchestrator | Monday 06 April 2026 02:49:52 +0000 (0:00:00.144) 0:00:46.876 **********
2026-04-06 02:49:54.192126 | orchestrator | changed: [testbed-node-5] => {
2026-04-06 02:49:54.192132 | orchestrator |  "_ceph_configure_lvm_config_data": {
2026-04-06 02:49:54.192137 | orchestrator
|  "ceph_osd_devices": { 2026-04-06 02:49:54.192143 | orchestrator |  "sdb": { 2026-04-06 02:49:54.192149 | orchestrator |  "osd_lvm_uuid": "fcd584d6-c8ff-5eaf-81cc-26105cfb5447" 2026-04-06 02:49:54.192155 | orchestrator |  }, 2026-04-06 02:49:54.192161 | orchestrator |  "sdc": { 2026-04-06 02:49:54.192167 | orchestrator |  "osd_lvm_uuid": "4d79f264-f564-5244-b3d4-1e30cd615742" 2026-04-06 02:49:54.192173 | orchestrator |  } 2026-04-06 02:49:54.192205 | orchestrator |  }, 2026-04-06 02:49:54.192212 | orchestrator |  "lvm_volumes": [ 2026-04-06 02:49:54.192218 | orchestrator |  { 2026-04-06 02:49:54.192224 | orchestrator |  "data": "osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447", 2026-04-06 02:49:54.192230 | orchestrator |  "data_vg": "ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447" 2026-04-06 02:49:54.192236 | orchestrator |  }, 2026-04-06 02:49:54.192241 | orchestrator |  { 2026-04-06 02:49:54.192247 | orchestrator |  "data": "osd-block-4d79f264-f564-5244-b3d4-1e30cd615742", 2026-04-06 02:49:54.192252 | orchestrator |  "data_vg": "ceph-4d79f264-f564-5244-b3d4-1e30cd615742" 2026-04-06 02:49:54.192259 | orchestrator |  } 2026-04-06 02:49:54.192264 | orchestrator |  ] 2026-04-06 02:49:54.192271 | orchestrator |  } 2026-04-06 02:49:54.192277 | orchestrator | } 2026-04-06 02:49:54.192282 | orchestrator | 2026-04-06 02:49:54.192288 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-06 02:49:54.192293 | orchestrator | Monday 06 April 2026 02:49:53 +0000 (0:00:00.248) 0:00:47.125 ********** 2026-04-06 02:49:54.192300 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-06 02:49:54.192305 | orchestrator | 2026-04-06 02:49:54.192311 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 02:49:54.192317 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-06 02:49:54.192325 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-06 02:49:54.192330 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-06 02:49:54.192336 | orchestrator | 2026-04-06 02:49:54.192342 | orchestrator | 2026-04-06 02:49:54.192347 | orchestrator | 2026-04-06 02:49:54.192353 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 02:49:54.192361 | orchestrator | Monday 06 April 2026 02:49:54 +0000 (0:00:01.066) 0:00:48.191 ********** 2026-04-06 02:49:54.192366 | orchestrator | =============================================================================== 2026-04-06 02:49:54.192372 | orchestrator | Write configuration file ------------------------------------------------ 4.63s 2026-04-06 02:49:54.192385 | orchestrator | Add known links to the list of available block devices ------------------ 1.43s 2026-04-06 02:49:54.192391 | orchestrator | Add known partitions to the list of available block devices ------------- 1.33s 2026-04-06 02:49:54.192397 | orchestrator | Add known partitions to the list of available block devices ------------- 1.19s 2026-04-06 02:49:54.192402 | orchestrator | Add known partitions to the list of available block devices ------------- 1.00s 2026-04-06 02:49:54.192408 | orchestrator | Add known links to the list of available block devices ------------------ 0.99s 2026-04-06 02:49:54.192413 | orchestrator | Set DB devices config data ---------------------------------------------- 0.98s 2026-04-06 02:49:54.192419 | orchestrator | Print configuration data ------------------------------------------------ 0.96s 2026-04-06 02:49:54.192425 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.93s 2026-04-06 02:49:54.192431 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.91s 2026-04-06 
02:49:54.192436 | orchestrator | Add known partitions to the list of available block devices ------------- 0.83s 2026-04-06 02:49:54.192442 | orchestrator | Get initial list of available block devices ----------------------------- 0.82s 2026-04-06 02:49:54.192448 | orchestrator | Add known links to the list of available block devices ------------------ 0.79s 2026-04-06 02:49:54.192463 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s 2026-04-06 02:49:54.724106 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s 2026-04-06 02:49:54.724337 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s 2026-04-06 02:49:54.724376 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s 2026-04-06 02:49:54.724418 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s 2026-04-06 02:49:54.724437 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.73s 2026-04-06 02:49:54.724453 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2026-04-06 02:50:17.617584 | orchestrator | 2026-04-06 02:50:17 | INFO  | Task 45dced23-502f-414e-9f46-72ef8bbc1811 (sync inventory) is running in background. Output coming soon. 
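Editor's note: the `_ceph_configure_lvm_config_data` structure printed above pairs each OSD disk's `osd_lvm_uuid` with an `osd-block-<uuid>` logical volume inside a `ceph-<uuid>` volume group. A minimal sketch of that derivation (plain Python for illustration; the playbook itself builds this with `set_fact`/Jinja2 templating, not this function):

```python
def build_lvm_volumes(ceph_osd_devices):
    """Derive ceph-ansible style lvm_volumes entries from per-disk OSD UUIDs.

    Illustrative assumption: one entry per device, named after its
    osd_lvm_uuid, exactly as seen in the printed configuration data.
    """
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for _device, cfg in sorted(ceph_osd_devices.items())
    ]


# Values as printed for testbed-node-5 in the log above.
devices = {
    "sdb": {"osd_lvm_uuid": "fcd584d6-c8ff-5eaf-81cc-26105cfb5447"},
    "sdc": {"osd_lvm_uuid": "4d79f264-f564-5244-b3d4-1e30cd615742"},
}
print(build_lvm_volumes(devices))
```

Running this against the node-5 values reproduces the `lvm_volumes` list shown in the `Print configuration data` task output.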
2026-04-06 02:50:52.067764 | orchestrator | 2026-04-06 02:50:19 | INFO  | Starting group_vars file reorganization
2026-04-06 02:50:52.068580 | orchestrator | 2026-04-06 02:50:19 | INFO  | Moved 0 file(s) to their respective directories
2026-04-06 02:50:52.068625 | orchestrator | 2026-04-06 02:50:19 | INFO  | Group_vars file reorganization completed
2026-04-06 02:50:52.068636 | orchestrator | 2026-04-06 02:50:23 | INFO  | Starting variable preparation from inventory
2026-04-06 02:50:52.068648 | orchestrator | 2026-04-06 02:50:26 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-06 02:50:52.068659 | orchestrator | 2026-04-06 02:50:26 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-06 02:50:52.068668 | orchestrator | 2026-04-06 02:50:26 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-06 02:50:52.068678 | orchestrator | 2026-04-06 02:50:26 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-06 02:50:52.068688 | orchestrator | 2026-04-06 02:50:26 | INFO  | Variable preparation completed
2026-04-06 02:50:52.068696 | orchestrator | 2026-04-06 02:50:28 | INFO  | Starting inventory overwrite handling
2026-04-06 02:50:52.068704 | orchestrator | 2026-04-06 02:50:28 | INFO  | Handling group overwrites in 99-overwrite
2026-04-06 02:50:52.068712 | orchestrator | 2026-04-06 02:50:28 | INFO  | Removing group frr:children from 60-generic
2026-04-06 02:50:52.068720 | orchestrator | 2026-04-06 02:50:28 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-06 02:50:52.068729 | orchestrator | 2026-04-06 02:50:28 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-06 02:50:52.068767 | orchestrator | 2026-04-06 02:50:28 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-06 02:50:52.068785 | orchestrator | 2026-04-06 02:50:28 | INFO  | Handling group overwrites in 20-roles
2026-04-06 02:50:52.068794 | orchestrator | 2026-04-06 02:50:28 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-06 02:50:52.068803 | orchestrator | 2026-04-06 02:50:28 | INFO  | Removed 5 group(s) in total
2026-04-06 02:50:52.068810 | orchestrator | 2026-04-06 02:50:28 | INFO  | Inventory overwrite handling completed
2026-04-06 02:50:52.068817 | orchestrator | 2026-04-06 02:50:30 | INFO  | Starting merge of inventory files
2026-04-06 02:50:52.068825 | orchestrator | 2026-04-06 02:50:30 | INFO  | Inventory files merged successfully
2026-04-06 02:50:52.068833 | orchestrator | 2026-04-06 02:50:36 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-06 02:50:52.068841 | orchestrator | 2026-04-06 02:50:50 | INFO  | Successfully wrote ClusterShell configuration
2026-04-06 02:50:52.068849 | orchestrator | [master 0cb036d] 2026-04-06-02-50
2026-04-06 02:50:52.068860 | orchestrator |  1 file changed, 30 insertions(+), 9 deletions(-)
2026-04-06 02:50:54.706844 | orchestrator | 2026-04-06 02:50:54 | INFO  | Task 5fe58c72-740f-455b-88b3-016679fac190 (ceph-create-lvm-devices) was prepared for execution.
2026-04-06 02:50:54.706946 | orchestrator | 2026-04-06 02:50:54 | INFO  | It takes a moment until task 5fe58c72-740f-455b-88b3-016679fac190 (ceph-create-lvm-devices) has been started and output is visible here.
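Editor's note: the `ceph-create-lvm-devices` play whose output follows turns each `lvm_volumes` entry into a volume group on the raw disk and a single `osd-block` logical volume inside it (`Create block VGs` / `Create block LVs`). A sketch of the equivalent LVM CLI calls, under the assumption of one LV filling each VG; the play itself uses Ansible modules (the log shows a `community.general` collection warning), not these shell commands:

```python
def lvm_commands(lvm_volumes, pv_by_vg):
    """Emit the LVM CLI equivalent of the block VG/LV creation tasks.

    pv_by_vg maps each data_vg to its backing physical device; the
    vgcreate/lvcreate sequence and the 100%FREE sizing are illustrative
    assumptions, not taken from the playbook source.
    """
    cmds = []
    for vol in lvm_volumes:
        vg, lv = vol["data_vg"], vol["data"]
        cmds.append(f"vgcreate {vg} {pv_by_vg[vg]}")        # Create block VGs
        cmds.append(f"lvcreate -l 100%FREE -n {lv} {vg}")   # Create block LVs
    return cmds


# UUID as logged for testbed-node-3's sdb device in the play output;
# the sdb mapping comes from the 'Create dict of block VGs -> PVs' task items.
vols = [{"data": "osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33",
         "data_vg": "ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33"}]
print(lvm_commands(vols, {"ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33": "/dev/sdb"}))
```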
2026-04-06 02:51:08.333195 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-06 02:51:08.333347 | orchestrator | 2.16.14
2026-04-06 02:51:08.333364 | orchestrator |
2026-04-06 02:51:08.333372 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-06 02:51:08.333379 | orchestrator |
2026-04-06 02:51:08.333385 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-06 02:51:08.333391 | orchestrator | Monday 06 April 2026 02:50:59 +0000 (0:00:00.349) 0:00:00.349 **********
2026-04-06 02:51:08.333398 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-06 02:51:08.333403 | orchestrator |
2026-04-06 02:51:08.333409 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-06 02:51:08.333415 | orchestrator | Monday 06 April 2026 02:50:59 +0000 (0:00:00.302) 0:00:00.651 **********
2026-04-06 02:51:08.333421 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:51:08.333427 | orchestrator |
2026-04-06 02:51:08.333435 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:51:08.333444 | orchestrator | Monday 06 April 2026 02:51:00 +0000 (0:00:00.267) 0:00:00.918 **********
2026-04-06 02:51:08.333467 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-06 02:51:08.333494 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-06 02:51:08.333503 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-06 02:51:08.333511 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-06 02:51:08.333519 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-06 02:51:08.333527 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-06 02:51:08.333535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-06 02:51:08.333542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-06 02:51:08.333550 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-06 02:51:08.333559 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-06 02:51:08.333590 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-06 02:51:08.333600 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-06 02:51:08.333609 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-06 02:51:08.333619 | orchestrator |
2026-04-06 02:51:08.333625 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:51:08.333630 | orchestrator | Monday 06 April 2026 02:51:00 +0000 (0:00:00.604) 0:00:01.523 **********
2026-04-06 02:51:08.333636 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:08.333641 | orchestrator |
2026-04-06 02:51:08.333647 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:51:08.333652 | orchestrator | Monday 06 April 2026 02:51:01 +0000 (0:00:00.233) 0:00:01.757 **********
2026-04-06 02:51:08.333658 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:08.333663 | orchestrator |
2026-04-06 02:51:08.333669 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:51:08.333674 | orchestrator | Monday 06 April 2026 02:51:01 +0000 (0:00:00.244) 0:00:02.001 **********
2026-04-06 02:51:08.333679 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:08.333685 | orchestrator |
2026-04-06 02:51:08.333690 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:51:08.333696 | orchestrator | Monday 06 April 2026 02:51:01 +0000 (0:00:00.231) 0:00:02.232 **********
2026-04-06 02:51:08.333701 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:08.333707 | orchestrator |
2026-04-06 02:51:08.333712 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:51:08.333717 | orchestrator | Monday 06 April 2026 02:51:01 +0000 (0:00:00.223) 0:00:02.455 **********
2026-04-06 02:51:08.333723 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:08.333728 | orchestrator |
2026-04-06 02:51:08.333734 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:51:08.333757 | orchestrator | Monday 06 April 2026 02:51:01 +0000 (0:00:00.236) 0:00:02.691 **********
2026-04-06 02:51:08.333780 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:08.333787 | orchestrator |
2026-04-06 02:51:08.333793 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:51:08.333800 | orchestrator | Monday 06 April 2026 02:51:02 +0000 (0:00:00.252) 0:00:02.944 **********
2026-04-06 02:51:08.333806 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:08.333813 | orchestrator |
2026-04-06 02:51:08.333826 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:51:08.333833 | orchestrator | Monday 06 April 2026 02:51:02 +0000 (0:00:00.258) 0:00:03.203 **********
2026-04-06 02:51:08.333839 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:08.333846 | orchestrator |
2026-04-06 02:51:08.333853 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:51:08.333859 | orchestrator | Monday 06 April 2026 02:51:02 +0000 (0:00:00.229) 0:00:03.433 **********
2026-04-06 02:51:08.333866 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa)
2026-04-06 02:51:08.333873 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa)
2026-04-06 02:51:08.333880 | orchestrator |
2026-04-06 02:51:08.333886 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:51:08.333907 | orchestrator | Monday 06 April 2026 02:51:03 +0000 (0:00:00.489) 0:00:03.923 **********
2026-04-06 02:51:08.333914 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527)
2026-04-06 02:51:08.333920 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527)
2026-04-06 02:51:08.333927 | orchestrator |
2026-04-06 02:51:08.333934 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:51:08.333947 | orchestrator | Monday 06 April 2026 02:51:03 +0000 (0:00:00.764) 0:00:04.687 **********
2026-04-06 02:51:08.333952 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c)
2026-04-06 02:51:08.333958 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c)
2026-04-06 02:51:08.333963 | orchestrator |
2026-04-06 02:51:08.333969 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:51:08.333974 | orchestrator | Monday 06 April 2026 02:51:04 +0000 (0:00:00.786) 0:00:05.474 **********
2026-04-06 02:51:08.333980 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103)
2026-04-06 02:51:08.333990 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103)
2026-04-06 02:51:08.333996 | orchestrator |
2026-04-06 02:51:08.334002 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:51:08.334008 | orchestrator | Monday 06 April 2026 02:51:05 +0000 (0:00:01.008) 0:00:06.483 **********
2026-04-06 02:51:08.334059 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-06 02:51:08.334066 | orchestrator |
2026-04-06 02:51:08.334071 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:08.334077 | orchestrator | Monday 06 April 2026 02:51:06 +0000 (0:00:00.374) 0:00:06.858 **********
2026-04-06 02:51:08.334082 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-06 02:51:08.334088 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-06 02:51:08.334093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-06 02:51:08.334099 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-06 02:51:08.334104 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-06 02:51:08.334109 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-06 02:51:08.334115 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-06 02:51:08.334120 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-06 02:51:08.334126 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-06 02:51:08.334131 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-06 02:51:08.334137 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-06 02:51:08.334142 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-06 02:51:08.334147 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-06 02:51:08.334153 | orchestrator |
2026-04-06 02:51:08.334158 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:08.334164 | orchestrator | Monday 06 April 2026 02:51:06 +0000 (0:00:00.496) 0:00:07.355 **********
2026-04-06 02:51:08.334169 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:08.334175 | orchestrator |
2026-04-06 02:51:08.334180 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:08.334186 | orchestrator | Monday 06 April 2026 02:51:06 +0000 (0:00:00.240) 0:00:07.595 **********
2026-04-06 02:51:08.334191 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:08.334197 | orchestrator |
2026-04-06 02:51:08.334202 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:08.334208 | orchestrator | Monday 06 April 2026 02:51:07 +0000 (0:00:00.236) 0:00:07.831 **********
2026-04-06 02:51:08.334213 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:08.334223 | orchestrator |
2026-04-06 02:51:08.334229 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:08.334259 | orchestrator | Monday 06 April 2026 02:51:07 +0000 (0:00:00.234) 0:00:08.065 **********
2026-04-06 02:51:08.334272 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:08.334284 | orchestrator |
2026-04-06 02:51:08.334293 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:08.334301 | orchestrator | Monday 06 April 2026 02:51:07 +0000 (0:00:00.242) 0:00:08.308 **********
2026-04-06 02:51:08.334309 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:08.334318 | orchestrator |
2026-04-06 02:51:08.334326 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:08.334345 | orchestrator | Monday 06 April 2026 02:51:07 +0000 (0:00:00.233) 0:00:08.541 **********
2026-04-06 02:51:08.334362 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:08.334371 | orchestrator |
2026-04-06 02:51:08.334379 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:08.334388 | orchestrator | Monday 06 April 2026 02:51:08 +0000 (0:00:00.251) 0:00:08.793 **********
2026-04-06 02:51:08.334397 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:08.334406 | orchestrator |
2026-04-06 02:51:08.334422 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:17.169963 | orchestrator | Monday 06 April 2026 02:51:08 +0000 (0:00:00.239) 0:00:09.032 **********
2026-04-06 02:51:17.170104 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:17.170116 | orchestrator |
2026-04-06 02:51:17.170125 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:17.170133 | orchestrator | Monday 06 April 2026 02:51:09 +0000 (0:00:00.715) 0:00:09.747 **********
2026-04-06 02:51:17.170141 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-06 02:51:17.170149 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-06 02:51:17.170157 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-06 02:51:17.170164 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-06 02:51:17.170171 | orchestrator |
2026-04-06 02:51:17.170178 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:17.170185 | orchestrator | Monday 06 April 2026 02:51:09 +0000 (0:00:00.760) 0:00:10.508 **********
2026-04-06 02:51:17.170193 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:17.170200 | orchestrator |
2026-04-06 02:51:17.170207 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:17.170214 | orchestrator | Monday 06 April 2026 02:51:10 +0000 (0:00:00.230) 0:00:10.739 **********
2026-04-06 02:51:17.170221 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:17.170228 | orchestrator |
2026-04-06 02:51:17.170310 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:17.170321 | orchestrator | Monday 06 April 2026 02:51:10 +0000 (0:00:00.214) 0:00:10.953 **********
2026-04-06 02:51:17.170329 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:17.170336 | orchestrator |
2026-04-06 02:51:17.170344 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:17.170352 | orchestrator | Monday 06 April 2026 02:51:10 +0000 (0:00:00.252) 0:00:11.206 **********
2026-04-06 02:51:17.170360 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:17.170368 | orchestrator |
2026-04-06 02:51:17.170376 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-06 02:51:17.170391 | orchestrator | Monday 06 April 2026 02:51:10 +0000 (0:00:00.225) 0:00:11.432 **********
2026-04-06 02:51:17.170398 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:17.170405 | orchestrator |
2026-04-06 02:51:17.170412 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-06 02:51:17.170419 | orchestrator | Monday 06 April 2026 02:51:10 +0000 (0:00:00.179) 0:00:11.612 **********
2026-04-06 02:51:17.170427 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '44d7a625-0d29-5597-9a0c-b91ce06f2e33'}})
2026-04-06 02:51:17.170454 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '33ff4195-b9ae-565c-9501-f62265c8cf2c'}})
2026-04-06 02:51:17.170461 | orchestrator |
2026-04-06 02:51:17.170468 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-06 02:51:17.170476 | orchestrator | Monday 06 April 2026 02:51:11 +0000 (0:00:00.217) 0:00:11.829 **********
2026-04-06 02:51:17.170484 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})
2026-04-06 02:51:17.170492 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})
2026-04-06 02:51:17.170500 | orchestrator |
2026-04-06 02:51:17.170507 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-06 02:51:17.170515 | orchestrator | Monday 06 April 2026 02:51:13 +0000 (0:00:02.052) 0:00:13.882 **********
2026-04-06 02:51:17.170523 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})
2026-04-06 02:51:17.170532 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})
2026-04-06 02:51:17.170540 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:17.170548 | orchestrator |
2026-04-06 02:51:17.170555 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-06 02:51:17.170563 | orchestrator | Monday 06 April 2026 02:51:13 +0000 (0:00:00.167) 0:00:14.050 **********
2026-04-06 02:51:17.170571 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})
2026-04-06 02:51:17.170578 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})
2026-04-06 02:51:17.170586 | orchestrator |
2026-04-06 02:51:17.170592 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-06 02:51:17.170599 | orchestrator | Monday 06 April 2026 02:51:14 +0000 (0:00:01.569) 0:00:15.619 **********
2026-04-06 02:51:17.170606 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})
2026-04-06 02:51:17.170613 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})
2026-04-06 02:51:17.170620 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:17.170643 | orchestrator |
2026-04-06 02:51:17.170650 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-06 02:51:17.170657 | orchestrator | Monday 06 April 2026 02:51:15 +0000 (0:00:00.387) 0:00:15.810 **********
2026-04-06 02:51:17.170680 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:17.170688 | orchestrator |
2026-04-06 02:51:17.170695 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-06 02:51:17.170702 | orchestrator | Monday 06 April 2026 02:51:15 +0000 (0:00:00.387) 0:00:16.198 **********
2026-04-06 02:51:17.170710 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})
2026-04-06 02:51:17.170717 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})
2026-04-06 02:51:17.170724 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:17.170731 | orchestrator |
2026-04-06 02:51:17.170738 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-06 02:51:17.170745 | orchestrator | Monday 06 April 2026 02:51:15 +0000 (0:00:00.168) 0:00:16.366 **********
2026-04-06 02:51:17.170758 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:17.170765 | orchestrator |
2026-04-06 02:51:17.170772 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-06 02:51:17.170780 | orchestrator | Monday 06 April 2026 02:51:15 +0000 (0:00:00.160) 0:00:16.527 **********
2026-04-06 02:51:17.170791 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})
2026-04-06 02:51:17.170799 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})
2026-04-06 02:51:17.170806 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:17.170813 | orchestrator |
2026-04-06 02:51:17.170820 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-06 02:51:17.170827 | orchestrator | Monday 06 April 2026 02:51:16 +0000 (0:00:00.205) 0:00:16.732 **********
2026-04-06 02:51:17.170834 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:17.170841 | orchestrator |
2026-04-06 02:51:17.170848 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-06 02:51:17.170855 | orchestrator | Monday 06 April 2026 02:51:16 +0000 (0:00:00.143) 0:00:16.876 **********
2026-04-06 02:51:17.170862 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})
2026-04-06 02:51:17.170870 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})
2026-04-06 02:51:17.170877 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:17.170884 | orchestrator |
2026-04-06 02:51:17.170891 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-06 02:51:17.170898 | orchestrator | Monday 06 April 2026 02:51:16 +0000 (0:00:00.193) 0:00:17.069 **********
2026-04-06 02:51:17.170905 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:51:17.170913 | orchestrator |
2026-04-06 02:51:17.170920 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-06 02:51:17.170927 | orchestrator | Monday 06 April 2026 02:51:16 +0000 (0:00:00.149) 0:00:17.218 **********
2026-04-06 02:51:17.170934 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})
2026-04-06 02:51:17.170941 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})
2026-04-06 02:51:17.170948 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:17.170956 | orchestrator |
2026-04-06 02:51:17.170963 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-06 02:51:17.170970 | orchestrator | Monday 06 April 2026 02:51:16 +0000 (0:00:00.177) 0:00:17.395 **********
2026-04-06 02:51:17.170977 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})
2026-04-06 02:51:17.170984 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})
2026-04-06 02:51:17.170991 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:17.170999 | orchestrator |
2026-04-06 02:51:17.171006 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-06 02:51:17.171013 | orchestrator | Monday 06 April 2026 02:51:16 +0000 (0:00:00.169) 0:00:17.565 **********
2026-04-06 02:51:17.171020 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})
2026-04-06 02:51:17.171027 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})
2026-04-06 02:51:17.171039 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:17.171046 | orchestrator |
2026-04-06 02:51:17.171053 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-06 02:51:17.171060 | orchestrator | Monday 06 April 2026 02:51:17 +0000 (0:00:00.168) 0:00:17.734 **********
2026-04-06 02:51:17.171068 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:17.171074 | orchestrator |
2026-04-06 02:51:17.171080 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-06 02:51:17.171089 | orchestrator | Monday 06 April 2026 02:51:17 +0000 (0:00:00.140) 0:00:17.874 **********
2026-04-06 02:51:24.380895 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:51:24.380994 | orchestrator |
2026-04-06 02:51:24.381006 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a
DB+WAL VG] ***************** 2026-04-06 02:51:24.381017 | orchestrator | Monday 06 April 2026 02:51:17 +0000 (0:00:00.162) 0:00:18.037 ********** 2026-04-06 02:51:24.381025 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:24.381034 | orchestrator | 2026-04-06 02:51:24.381043 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-06 02:51:24.381051 | orchestrator | Monday 06 April 2026 02:51:17 +0000 (0:00:00.405) 0:00:18.442 ********** 2026-04-06 02:51:24.381059 | orchestrator | ok: [testbed-node-3] => { 2026-04-06 02:51:24.381068 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-06 02:51:24.381076 | orchestrator | } 2026-04-06 02:51:24.381084 | orchestrator | 2026-04-06 02:51:24.381092 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-06 02:51:24.381100 | orchestrator | Monday 06 April 2026 02:51:17 +0000 (0:00:00.184) 0:00:18.627 ********** 2026-04-06 02:51:24.381108 | orchestrator | ok: [testbed-node-3] => { 2026-04-06 02:51:24.381116 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-06 02:51:24.381124 | orchestrator | } 2026-04-06 02:51:24.381132 | orchestrator | 2026-04-06 02:51:24.381140 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-06 02:51:24.381163 | orchestrator | Monday 06 April 2026 02:51:18 +0000 (0:00:00.166) 0:00:18.793 ********** 2026-04-06 02:51:24.381172 | orchestrator | ok: [testbed-node-3] => { 2026-04-06 02:51:24.381180 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-06 02:51:24.381188 | orchestrator | } 2026-04-06 02:51:24.381196 | orchestrator | 2026-04-06 02:51:24.381203 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-06 02:51:24.381211 | orchestrator | Monday 06 April 2026 02:51:18 +0000 (0:00:00.165) 0:00:18.959 ********** 2026-04-06 02:51:24.381220 | orchestrator | ok: 
[testbed-node-3] 2026-04-06 02:51:24.381227 | orchestrator | 2026-04-06 02:51:24.381235 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-06 02:51:24.381315 | orchestrator | Monday 06 April 2026 02:51:18 +0000 (0:00:00.675) 0:00:19.634 ********** 2026-04-06 02:51:24.381327 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:51:24.381336 | orchestrator | 2026-04-06 02:51:24.381344 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-06 02:51:24.381352 | orchestrator | Monday 06 April 2026 02:51:19 +0000 (0:00:00.524) 0:00:20.158 ********** 2026-04-06 02:51:24.381360 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:51:24.381368 | orchestrator | 2026-04-06 02:51:24.381376 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-06 02:51:24.381384 | orchestrator | Monday 06 April 2026 02:51:20 +0000 (0:00:00.605) 0:00:20.764 ********** 2026-04-06 02:51:24.381392 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:51:24.381400 | orchestrator | 2026-04-06 02:51:24.381410 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-06 02:51:24.381419 | orchestrator | Monday 06 April 2026 02:51:20 +0000 (0:00:00.165) 0:00:20.929 ********** 2026-04-06 02:51:24.381428 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:24.381438 | orchestrator | 2026-04-06 02:51:24.381446 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-06 02:51:24.381476 | orchestrator | Monday 06 April 2026 02:51:20 +0000 (0:00:00.151) 0:00:21.081 ********** 2026-04-06 02:51:24.381486 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:24.381495 | orchestrator | 2026-04-06 02:51:24.381504 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-06 02:51:24.381513 | orchestrator | 
Monday 06 April 2026 02:51:20 +0000 (0:00:00.130) 0:00:21.212 ********** 2026-04-06 02:51:24.381522 | orchestrator | ok: [testbed-node-3] => { 2026-04-06 02:51:24.381531 | orchestrator |  "vgs_report": { 2026-04-06 02:51:24.381540 | orchestrator |  "vg": [] 2026-04-06 02:51:24.381549 | orchestrator |  } 2026-04-06 02:51:24.381558 | orchestrator | } 2026-04-06 02:51:24.381567 | orchestrator | 2026-04-06 02:51:24.381576 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-06 02:51:24.381586 | orchestrator | Monday 06 April 2026 02:51:20 +0000 (0:00:00.162) 0:00:21.374 ********** 2026-04-06 02:51:24.381595 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:24.381604 | orchestrator | 2026-04-06 02:51:24.381613 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-06 02:51:24.381623 | orchestrator | Monday 06 April 2026 02:51:20 +0000 (0:00:00.148) 0:00:21.523 ********** 2026-04-06 02:51:24.381631 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:24.381641 | orchestrator | 2026-04-06 02:51:24.381650 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-06 02:51:24.381659 | orchestrator | Monday 06 April 2026 02:51:21 +0000 (0:00:00.384) 0:00:21.907 ********** 2026-04-06 02:51:24.381667 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:24.381677 | orchestrator | 2026-04-06 02:51:24.381686 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-06 02:51:24.381695 | orchestrator | Monday 06 April 2026 02:51:21 +0000 (0:00:00.161) 0:00:22.068 ********** 2026-04-06 02:51:24.381705 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:24.381713 | orchestrator | 2026-04-06 02:51:24.381723 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-06 02:51:24.381731 | orchestrator | Monday 
06 April 2026 02:51:21 +0000 (0:00:00.162) 0:00:22.231 ********** 2026-04-06 02:51:24.381741 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:24.381750 | orchestrator | 2026-04-06 02:51:24.381760 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-06 02:51:24.381769 | orchestrator | Monday 06 April 2026 02:51:21 +0000 (0:00:00.154) 0:00:22.386 ********** 2026-04-06 02:51:24.381778 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:24.381787 | orchestrator | 2026-04-06 02:51:24.381795 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-06 02:51:24.381803 | orchestrator | Monday 06 April 2026 02:51:21 +0000 (0:00:00.154) 0:00:22.541 ********** 2026-04-06 02:51:24.381811 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:24.381819 | orchestrator | 2026-04-06 02:51:24.381826 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-06 02:51:24.381834 | orchestrator | Monday 06 April 2026 02:51:21 +0000 (0:00:00.148) 0:00:22.689 ********** 2026-04-06 02:51:24.381858 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:24.381871 | orchestrator | 2026-04-06 02:51:24.381885 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-06 02:51:24.381905 | orchestrator | Monday 06 April 2026 02:51:22 +0000 (0:00:00.156) 0:00:22.846 ********** 2026-04-06 02:51:24.381920 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:24.381933 | orchestrator | 2026-04-06 02:51:24.381946 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-06 02:51:24.381958 | orchestrator | Monday 06 April 2026 02:51:22 +0000 (0:00:00.148) 0:00:22.994 ********** 2026-04-06 02:51:24.381970 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:24.381983 | orchestrator | 2026-04-06 02:51:24.381997 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-06 02:51:24.382010 | orchestrator | Monday 06 April 2026 02:51:22 +0000 (0:00:00.163) 0:00:23.158 ********** 2026-04-06 02:51:24.382106 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:24.382115 | orchestrator | 2026-04-06 02:51:24.382123 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-06 02:51:24.382131 | orchestrator | Monday 06 April 2026 02:51:22 +0000 (0:00:00.156) 0:00:23.314 ********** 2026-04-06 02:51:24.382139 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:24.382147 | orchestrator | 2026-04-06 02:51:24.382161 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-06 02:51:24.382169 | orchestrator | Monday 06 April 2026 02:51:22 +0000 (0:00:00.180) 0:00:23.494 ********** 2026-04-06 02:51:24.382177 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:24.382185 | orchestrator | 2026-04-06 02:51:24.382193 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-06 02:51:24.382201 | orchestrator | Monday 06 April 2026 02:51:22 +0000 (0:00:00.144) 0:00:23.639 ********** 2026-04-06 02:51:24.382209 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:24.382217 | orchestrator | 2026-04-06 02:51:24.382225 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-06 02:51:24.382236 | orchestrator | Monday 06 April 2026 02:51:23 +0000 (0:00:00.396) 0:00:24.036 ********** 2026-04-06 02:51:24.382278 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})  2026-04-06 02:51:24.382294 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 
'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})  2026-04-06 02:51:24.382307 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:24.382319 | orchestrator | 2026-04-06 02:51:24.382330 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-06 02:51:24.382342 | orchestrator | Monday 06 April 2026 02:51:23 +0000 (0:00:00.167) 0:00:24.203 ********** 2026-04-06 02:51:24.382354 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})  2026-04-06 02:51:24.382367 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})  2026-04-06 02:51:24.382380 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:24.382393 | orchestrator | 2026-04-06 02:51:24.382404 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-06 02:51:24.382417 | orchestrator | Monday 06 April 2026 02:51:23 +0000 (0:00:00.173) 0:00:24.377 ********** 2026-04-06 02:51:24.382429 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})  2026-04-06 02:51:24.382442 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})  2026-04-06 02:51:24.382455 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:24.382468 | orchestrator | 2026-04-06 02:51:24.382481 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-06 02:51:24.382493 | orchestrator | Monday 06 April 2026 02:51:23 +0000 (0:00:00.175) 0:00:24.553 ********** 2026-04-06 02:51:24.382506 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})  2026-04-06 02:51:24.382519 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})  2026-04-06 02:51:24.382532 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:24.382545 | orchestrator | 2026-04-06 02:51:24.382558 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-06 02:51:24.382572 | orchestrator | Monday 06 April 2026 02:51:24 +0000 (0:00:00.181) 0:00:24.735 ********** 2026-04-06 02:51:24.382598 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})  2026-04-06 02:51:24.382612 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})  2026-04-06 02:51:24.382621 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:24.382629 | orchestrator | 2026-04-06 02:51:24.382637 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-06 02:51:24.382645 | orchestrator | Monday 06 April 2026 02:51:24 +0000 (0:00:00.161) 0:00:24.896 ********** 2026-04-06 02:51:24.382664 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})  2026-04-06 02:51:30.197159 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})  2026-04-06 02:51:30.197239 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:30.197294 | orchestrator | 2026-04-06 02:51:30.197301 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-04-06 02:51:30.197307 | orchestrator | Monday 06 April 2026 02:51:24 +0000 (0:00:00.189) 0:00:25.085 ********** 2026-04-06 02:51:30.197312 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})  2026-04-06 02:51:30.197317 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})  2026-04-06 02:51:30.197322 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:30.197326 | orchestrator | 2026-04-06 02:51:30.197349 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-06 02:51:30.197360 | orchestrator | Monday 06 April 2026 02:51:24 +0000 (0:00:00.168) 0:00:25.254 ********** 2026-04-06 02:51:30.197365 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})  2026-04-06 02:51:30.197370 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})  2026-04-06 02:51:30.197374 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:30.197379 | orchestrator | 2026-04-06 02:51:30.197383 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-06 02:51:30.197387 | orchestrator | Monday 06 April 2026 02:51:24 +0000 (0:00:00.180) 0:00:25.434 ********** 2026-04-06 02:51:30.197392 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:51:30.197397 | orchestrator | 2026-04-06 02:51:30.197402 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-06 02:51:30.197406 | orchestrator | Monday 06 April 2026 02:51:25 +0000 
(0:00:00.561) 0:00:25.996 ********** 2026-04-06 02:51:30.197411 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:51:30.197415 | orchestrator | 2026-04-06 02:51:30.197419 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-06 02:51:30.197424 | orchestrator | Monday 06 April 2026 02:51:25 +0000 (0:00:00.532) 0:00:26.529 ********** 2026-04-06 02:51:30.197428 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:51:30.197439 | orchestrator | 2026-04-06 02:51:30.197443 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-06 02:51:30.197448 | orchestrator | Monday 06 April 2026 02:51:25 +0000 (0:00:00.156) 0:00:26.686 ********** 2026-04-06 02:51:30.197453 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'vg_name': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'}) 2026-04-06 02:51:30.197458 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'vg_name': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'}) 2026-04-06 02:51:30.197475 | orchestrator | 2026-04-06 02:51:30.197480 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-06 02:51:30.197484 | orchestrator | Monday 06 April 2026 02:51:26 +0000 (0:00:00.192) 0:00:26.878 ********** 2026-04-06 02:51:30.197489 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})  2026-04-06 02:51:30.197493 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})  2026-04-06 02:51:30.197497 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:30.197502 | orchestrator | 2026-04-06 02:51:30.197506 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-04-06 02:51:30.197510 | orchestrator | Monday 06 April 2026 02:51:26 +0000 (0:00:00.427) 0:00:27.306 ********** 2026-04-06 02:51:30.197515 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})  2026-04-06 02:51:30.197519 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})  2026-04-06 02:51:30.197523 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:30.197528 | orchestrator | 2026-04-06 02:51:30.197532 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-06 02:51:30.197536 | orchestrator | Monday 06 April 2026 02:51:26 +0000 (0:00:00.164) 0:00:27.471 ********** 2026-04-06 02:51:30.197541 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})  2026-04-06 02:51:30.197545 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})  2026-04-06 02:51:30.197549 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:51:30.197554 | orchestrator | 2026-04-06 02:51:30.197558 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-06 02:51:30.197562 | orchestrator | Monday 06 April 2026 02:51:26 +0000 (0:00:00.170) 0:00:27.641 ********** 2026-04-06 02:51:30.197579 | orchestrator | ok: [testbed-node-3] => { 2026-04-06 02:51:30.197584 | orchestrator |  "lvm_report": { 2026-04-06 02:51:30.197588 | orchestrator |  "lv": [ 2026-04-06 02:51:30.197593 | orchestrator |  { 2026-04-06 02:51:30.197597 | orchestrator |  "lv_name": 
"osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c", 2026-04-06 02:51:30.197602 | orchestrator |  "vg_name": "ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c" 2026-04-06 02:51:30.197607 | orchestrator |  }, 2026-04-06 02:51:30.197611 | orchestrator |  { 2026-04-06 02:51:30.197616 | orchestrator |  "lv_name": "osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33", 2026-04-06 02:51:30.197620 | orchestrator |  "vg_name": "ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33" 2026-04-06 02:51:30.197624 | orchestrator |  } 2026-04-06 02:51:30.197629 | orchestrator |  ], 2026-04-06 02:51:30.197633 | orchestrator |  "pv": [ 2026-04-06 02:51:30.197638 | orchestrator |  { 2026-04-06 02:51:30.197642 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-06 02:51:30.197646 | orchestrator |  "vg_name": "ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33" 2026-04-06 02:51:30.197651 | orchestrator |  }, 2026-04-06 02:51:30.197655 | orchestrator |  { 2026-04-06 02:51:30.197664 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-06 02:51:30.197669 | orchestrator |  "vg_name": "ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c" 2026-04-06 02:51:30.197673 | orchestrator |  } 2026-04-06 02:51:30.197678 | orchestrator |  ] 2026-04-06 02:51:30.197682 | orchestrator |  } 2026-04-06 02:51:30.197687 | orchestrator | } 2026-04-06 02:51:30.197696 | orchestrator | 2026-04-06 02:51:30.197700 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-06 02:51:30.197705 | orchestrator | 2026-04-06 02:51:30.197709 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-06 02:51:30.197714 | orchestrator | Monday 06 April 2026 02:51:27 +0000 (0:00:00.335) 0:00:27.976 ********** 2026-04-06 02:51:30.197719 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-06 02:51:30.197724 | orchestrator | 2026-04-06 02:51:30.197729 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-06 
02:51:30.197734 | orchestrator | Monday 06 April 2026 02:51:27 +0000 (0:00:00.290) 0:00:28.267 ********** 2026-04-06 02:51:30.197740 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:51:30.197745 | orchestrator | 2026-04-06 02:51:30.197750 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:51:30.197755 | orchestrator | Monday 06 April 2026 02:51:27 +0000 (0:00:00.271) 0:00:28.538 ********** 2026-04-06 02:51:30.197760 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-06 02:51:30.197765 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-06 02:51:30.197770 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-06 02:51:30.197775 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-06 02:51:30.197780 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-06 02:51:30.197785 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-06 02:51:30.197790 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-06 02:51:30.197795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-06 02:51:30.197800 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-06 02:51:30.197805 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-06 02:51:30.197810 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-06 02:51:30.197815 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-06 02:51:30.197820 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-06 02:51:30.197825 | orchestrator | 2026-04-06 02:51:30.197830 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:51:30.197835 | orchestrator | Monday 06 April 2026 02:51:28 +0000 (0:00:00.474) 0:00:29.012 ********** 2026-04-06 02:51:30.197840 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:51:30.197845 | orchestrator | 2026-04-06 02:51:30.197850 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:51:30.197855 | orchestrator | Monday 06 April 2026 02:51:28 +0000 (0:00:00.249) 0:00:29.262 ********** 2026-04-06 02:51:30.197860 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:51:30.197865 | orchestrator | 2026-04-06 02:51:30.197870 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:51:30.197875 | orchestrator | Monday 06 April 2026 02:51:29 +0000 (0:00:00.725) 0:00:29.988 ********** 2026-04-06 02:51:30.197880 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:51:30.197885 | orchestrator | 2026-04-06 02:51:30.197890 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:51:30.197896 | orchestrator | Monday 06 April 2026 02:51:29 +0000 (0:00:00.223) 0:00:30.212 ********** 2026-04-06 02:51:30.197901 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:51:30.197905 | orchestrator | 2026-04-06 02:51:30.197910 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:51:30.197915 | orchestrator | Monday 06 April 2026 02:51:29 +0000 (0:00:00.225) 0:00:30.437 ********** 2026-04-06 02:51:30.197924 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:51:30.197929 | orchestrator | 2026-04-06 02:51:30.197934 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-04-06 02:51:30.197940 | orchestrator | Monday 06 April 2026 02:51:29 +0000 (0:00:00.218) 0:00:30.656 ********** 2026-04-06 02:51:30.197944 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:51:30.197950 | orchestrator | 2026-04-06 02:51:30.197957 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:51:42.684036 | orchestrator | Monday 06 April 2026 02:51:30 +0000 (0:00:00.246) 0:00:30.902 ********** 2026-04-06 02:51:42.684187 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:51:42.684215 | orchestrator | 2026-04-06 02:51:42.684236 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:51:42.684254 | orchestrator | Monday 06 April 2026 02:51:30 +0000 (0:00:00.241) 0:00:31.144 ********** 2026-04-06 02:51:42.684389 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:51:42.684411 | orchestrator | 2026-04-06 02:51:42.684430 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:51:42.684450 | orchestrator | Monday 06 April 2026 02:51:30 +0000 (0:00:00.216) 0:00:31.360 ********** 2026-04-06 02:51:42.684470 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336) 2026-04-06 02:51:42.684491 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336) 2026-04-06 02:51:42.684511 | orchestrator | 2026-04-06 02:51:42.684552 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:51:42.684574 | orchestrator | Monday 06 April 2026 02:51:31 +0000 (0:00:00.495) 0:00:31.855 ********** 2026-04-06 02:51:42.684597 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554) 2026-04-06 02:51:42.684617 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554)
2026-04-06 02:51:42.684636 | orchestrator |
2026-04-06 02:51:42.684657 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:51:42.684676 | orchestrator | Monday 06 April 2026 02:51:31 +0000 (0:00:00.519) 0:00:32.375 **********
2026-04-06 02:51:42.684696 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867)
2026-04-06 02:51:42.684716 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867)
2026-04-06 02:51:42.684736 | orchestrator |
2026-04-06 02:51:42.684753 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:51:42.684769 | orchestrator | Monday 06 April 2026 02:51:32 +0000 (0:00:00.457) 0:00:32.833 **********
2026-04-06 02:51:42.684784 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de)
2026-04-06 02:51:42.684805 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de)
2026-04-06 02:51:42.684831 | orchestrator |
2026-04-06 02:51:42.684850 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-06 02:51:42.684868 | orchestrator | Monday 06 April 2026 02:51:32 +0000 (0:00:00.726) 0:00:33.560 **********
2026-04-06 02:51:42.684884 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-06 02:51:42.684901 | orchestrator |
2026-04-06 02:51:42.684918 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:42.684935 | orchestrator | Monday 06 April 2026 02:51:33 +0000 (0:00:00.634) 0:00:34.195 **********
2026-04-06 02:51:42.684952 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-04-06 02:51:42.684970 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-04-06 02:51:42.684988 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-04-06 02:51:42.685039 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-04-06 02:51:42.685060 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-04-06 02:51:42.685078 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-04-06 02:51:42.685096 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-04-06 02:51:42.685115 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-04-06 02:51:42.685127 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-04-06 02:51:42.685137 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-04-06 02:51:42.685148 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-04-06 02:51:42.685159 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-04-06 02:51:42.685170 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-04-06 02:51:42.685180 | orchestrator |
2026-04-06 02:51:42.685191 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:42.685202 | orchestrator | Monday 06 April 2026 02:51:34 +0000 (0:00:01.023) 0:00:35.218 **********
2026-04-06 02:51:42.685213 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:42.685224 | orchestrator |
2026-04-06 02:51:42.685235 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:42.685246 | orchestrator | Monday 06 April 2026 02:51:34 +0000 (0:00:00.213) 0:00:35.432 **********
2026-04-06 02:51:42.685285 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:42.685297 | orchestrator |
2026-04-06 02:51:42.685308 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:42.685319 | orchestrator | Monday 06 April 2026 02:51:34 +0000 (0:00:00.233) 0:00:35.665 **********
2026-04-06 02:51:42.685330 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:42.685341 | orchestrator |
2026-04-06 02:51:42.685376 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:42.685388 | orchestrator | Monday 06 April 2026 02:51:35 +0000 (0:00:00.247) 0:00:35.913 **********
2026-04-06 02:51:42.685399 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:42.685410 | orchestrator |
2026-04-06 02:51:42.685421 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:42.685432 | orchestrator | Monday 06 April 2026 02:51:35 +0000 (0:00:00.232) 0:00:36.146 **********
2026-04-06 02:51:42.685443 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:42.685454 | orchestrator |
2026-04-06 02:51:42.685465 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:42.685476 | orchestrator | Monday 06 April 2026 02:51:35 +0000 (0:00:00.204) 0:00:36.351 **********
2026-04-06 02:51:42.685488 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:42.685498 | orchestrator |
2026-04-06 02:51:42.685509 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:42.685520 | orchestrator | Monday 06 April 2026 02:51:35 +0000 (0:00:00.235) 0:00:36.586 **********
2026-04-06 02:51:42.685541 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:42.685552 | orchestrator |
2026-04-06 02:51:42.685563 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:42.685574 | orchestrator | Monday 06 April 2026 02:51:36 +0000 (0:00:00.236) 0:00:36.822 **********
2026-04-06 02:51:42.685585 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:42.685595 | orchestrator |
2026-04-06 02:51:42.685606 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:42.685617 | orchestrator | Monday 06 April 2026 02:51:36 +0000 (0:00:00.205) 0:00:37.028 **********
2026-04-06 02:51:42.685628 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-04-06 02:51:42.685650 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-04-06 02:51:42.685661 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-04-06 02:51:42.685672 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-04-06 02:51:42.685683 | orchestrator |
2026-04-06 02:51:42.685694 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:42.685705 | orchestrator | Monday 06 April 2026 02:51:37 +0000 (0:00:01.017) 0:00:38.045 **********
2026-04-06 02:51:42.685716 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:42.685727 | orchestrator |
2026-04-06 02:51:42.685738 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:42.685748 | orchestrator | Monday 06 April 2026 02:51:38 +0000 (0:00:00.726) 0:00:38.771 **********
2026-04-06 02:51:42.685759 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:42.685770 | orchestrator |
2026-04-06 02:51:42.685781 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:42.685792 | orchestrator | Monday 06 April 2026 02:51:38 +0000 (0:00:00.223) 0:00:38.995 **********
2026-04-06 02:51:42.685803 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:42.685813 | orchestrator |
2026-04-06 02:51:42.685824 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-06 02:51:42.685835 | orchestrator | Monday 06 April 2026 02:51:38 +0000 (0:00:00.243) 0:00:39.238 **********
2026-04-06 02:51:42.685846 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:42.685857 | orchestrator |
2026-04-06 02:51:42.685868 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-06 02:51:42.685879 | orchestrator | Monday 06 April 2026 02:51:38 +0000 (0:00:00.263) 0:00:39.502 **********
2026-04-06 02:51:42.685889 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:42.685900 | orchestrator |
2026-04-06 02:51:42.685911 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-06 02:51:42.685922 | orchestrator | Monday 06 April 2026 02:51:38 +0000 (0:00:00.191) 0:00:39.694 **********
2026-04-06 02:51:42.685933 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'}})
2026-04-06 02:51:42.685945 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8c307d7c-3927-5061-a8a8-155bb148bb1a'}})
2026-04-06 02:51:42.685955 | orchestrator |
2026-04-06 02:51:42.685966 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-06 02:51:42.685977 | orchestrator | Monday 06 April 2026 02:51:39 +0000 (0:00:00.232) 0:00:39.927 **********
2026-04-06 02:51:42.685990 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 02:51:42.686002 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 02:51:42.686013 | orchestrator |
2026-04-06 02:51:42.686090 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-06 02:51:42.686102 | orchestrator | Monday 06 April 2026 02:51:41 +0000 (0:00:01.941) 0:00:41.868 **********
2026-04-06 02:51:42.686149 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 02:51:42.686164 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 02:51:42.686175 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:42.686186 | orchestrator |
2026-04-06 02:51:42.686197 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-06 02:51:42.686208 | orchestrator | Monday 06 April 2026 02:51:41 +0000 (0:00:00.170) 0:00:42.038 **********
2026-04-06 02:51:42.686219 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 02:51:42.686249 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 02:51:48.992794 | orchestrator |
2026-04-06 02:51:48.992887 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-06 02:51:48.992899 | orchestrator | Monday 06 April 2026 02:51:42 +0000 (0:00:01.347) 0:00:43.386 **********
2026-04-06 02:51:48.992907 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 02:51:48.992916 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 02:51:48.992924 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:48.992931 | orchestrator |
2026-04-06 02:51:48.992952 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-06 02:51:48.992959 | orchestrator | Monday 06 April 2026 02:51:42 +0000 (0:00:00.143) 0:00:43.553 **********
2026-04-06 02:51:48.992966 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:48.992974 | orchestrator |
2026-04-06 02:51:48.992980 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-06 02:51:48.992987 | orchestrator | Monday 06 April 2026 02:51:42 +0000 (0:00:00.143) 0:00:43.697 **********
2026-04-06 02:51:48.992994 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 02:51:48.993001 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 02:51:48.993008 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:48.993015 | orchestrator |
2026-04-06 02:51:48.993022 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-06 02:51:48.993029 | orchestrator | Monday 06 April 2026 02:51:43 +0000 (0:00:00.164) 0:00:43.861 **********
2026-04-06 02:51:48.993036 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:48.993042 | orchestrator |
2026-04-06 02:51:48.993049 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-06 02:51:48.993056 | orchestrator | Monday 06 April 2026 02:51:43 +0000 (0:00:00.169) 0:00:44.031 **********
2026-04-06 02:51:48.993063 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 02:51:48.993070 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 02:51:48.993077 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:48.993084 | orchestrator |
2026-04-06 02:51:48.993091 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-06 02:51:48.993098 | orchestrator | Monday 06 April 2026 02:51:43 +0000 (0:00:00.440) 0:00:44.472 **********
2026-04-06 02:51:48.993105 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:48.993112 | orchestrator |
2026-04-06 02:51:48.993118 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-06 02:51:48.993125 | orchestrator | Monday 06 April 2026 02:51:43 +0000 (0:00:00.167) 0:00:44.639 **********
2026-04-06 02:51:48.993132 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 02:51:48.993139 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 02:51:48.993146 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:48.993153 | orchestrator |
2026-04-06 02:51:48.993160 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-06 02:51:48.993183 | orchestrator | Monday 06 April 2026 02:51:44 +0000 (0:00:00.181) 0:00:44.820 **********
2026-04-06 02:51:48.993190 | orchestrator | ok: [testbed-node-4]
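The preceding steps ('Create dict of block VGs -> PVs from ceph_osd_devices', 'Create block VGs', 'Create block LVs') turn each ceph_osd_devices entry (here sdb and sdc, each carrying an osd_lvm_uuid) into a volume group named ceph-<uuid> that holds a single logical volume osd-block-<uuid>. A minimal sketch of that pattern in Ansible follows; the module choice (community.general.lvg/lvol) and the _block_vgs mapping are assumptions for illustration, not the actual OSISM task code:

```yaml
# Sketch only: module names and variable shapes are assumptions.
- name: Create block VGs
  community.general.lvg:
    vg: "{{ item.data_vg }}"                # e.g. ceph-c3bdc13a-...
    pvs: "{{ _block_vgs[item.data_vg] }}"   # hypothetical VG -> PV mapping, e.g. /dev/sdb
  loop: "{{ lvm_volumes }}"

- name: Create block LVs
  community.general.lvol:
    vg: "{{ item.data_vg }}"
    lv: "{{ item.data }}"                   # e.g. osd-block-c3bdc13a-...
    size: 100%VG
  loop: "{{ lvm_volumes }}"
```

Both modules are idempotent, which matches the `changed` status seen on this first run: a rerun over the same devices would report `ok` instead.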
2026-04-06 02:51:48.993197 | orchestrator |
2026-04-06 02:51:48.993204 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-06 02:51:48.993211 | orchestrator | Monday 06 April 2026 02:51:44 +0000 (0:00:00.170) 0:00:44.991 **********
2026-04-06 02:51:48.993218 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 02:51:48.993225 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 02:51:48.993233 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:48.993244 | orchestrator |
2026-04-06 02:51:48.993255 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-06 02:51:48.993291 | orchestrator | Monday 06 April 2026 02:51:44 +0000 (0:00:00.167) 0:00:45.158 **********
2026-04-06 02:51:48.993299 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 02:51:48.993306 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 02:51:48.993312 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:48.993319 | orchestrator |
2026-04-06 02:51:48.993326 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-06 02:51:48.993347 | orchestrator | Monday 06 April 2026 02:51:44 +0000 (0:00:00.204) 0:00:45.363 **********
2026-04-06 02:51:48.993355 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 02:51:48.993362 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 02:51:48.993368 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:48.993375 | orchestrator |
2026-04-06 02:51:48.993382 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-06 02:51:48.993389 | orchestrator | Monday 06 April 2026 02:51:44 +0000 (0:00:00.189) 0:00:45.553 **********
2026-04-06 02:51:48.993401 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:48.993409 | orchestrator |
2026-04-06 02:51:48.993416 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-06 02:51:48.993422 | orchestrator | Monday 06 April 2026 02:51:45 +0000 (0:00:00.166) 0:00:45.719 **********
2026-04-06 02:51:48.993429 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:48.993436 | orchestrator |
2026-04-06 02:51:48.993443 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-06 02:51:48.993450 | orchestrator | Monday 06 April 2026 02:51:45 +0000 (0:00:00.154) 0:00:45.874 **********
2026-04-06 02:51:48.993457 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:48.993463 | orchestrator |
2026-04-06 02:51:48.993470 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-06 02:51:48.993481 | orchestrator | Monday 06 April 2026 02:51:45 +0000 (0:00:00.153) 0:00:46.028 **********
2026-04-06 02:51:48.993491 | orchestrator | ok: [testbed-node-4] => {
2026-04-06 02:51:48.993501 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-04-06 02:51:48.993512 | orchestrator | }
2026-04-06 02:51:48.993523 | orchestrator |
2026-04-06 02:51:48.993534 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-06 02:51:48.993545 | orchestrator | Monday 06 April 2026 02:51:45 +0000 (0:00:00.161) 0:00:46.189 **********
2026-04-06 02:51:48.993572 | orchestrator | ok: [testbed-node-4] => {
2026-04-06 02:51:48.993583 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-04-06 02:51:48.993604 | orchestrator | }
2026-04-06 02:51:48.993616 | orchestrator |
2026-04-06 02:51:48.993627 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-06 02:51:48.993637 | orchestrator | Monday 06 April 2026 02:51:45 +0000 (0:00:00.144) 0:00:46.334 **********
2026-04-06 02:51:48.993648 | orchestrator | ok: [testbed-node-4] => {
2026-04-06 02:51:48.993659 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-04-06 02:51:48.993670 | orchestrator | }
2026-04-06 02:51:48.993677 | orchestrator |
2026-04-06 02:51:48.993684 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-06 02:51:48.993691 | orchestrator | Monday 06 April 2026 02:51:46 +0000 (0:00:00.437) 0:00:46.771 **********
2026-04-06 02:51:48.993698 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:51:48.993704 | orchestrator |
2026-04-06 02:51:48.993711 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-06 02:51:48.993718 | orchestrator | Monday 06 April 2026 02:51:46 +0000 (0:00:00.549) 0:00:47.321 **********
2026-04-06 02:51:48.993724 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:51:48.993731 | orchestrator |
2026-04-06 02:51:48.993738 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-06 02:51:48.993744 | orchestrator | Monday 06 April 2026 02:51:47 +0000 (0:00:00.554) 0:00:47.875 **********
2026-04-06 02:51:48.993751 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:51:48.993758 | orchestrator |
2026-04-06 02:51:48.993764 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-06 02:51:48.993771 | orchestrator | Monday 06 April 2026 02:51:47 +0000 (0:00:00.559) 0:00:48.435 **********
2026-04-06 02:51:48.993778 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:51:48.993785 | orchestrator |
2026-04-06 02:51:48.993791 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-06 02:51:48.993798 | orchestrator | Monday 06 April 2026 02:51:47 +0000 (0:00:00.191) 0:00:48.627 **********
2026-04-06 02:51:48.993805 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:48.993812 | orchestrator |
2026-04-06 02:51:48.993818 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-06 02:51:48.993825 | orchestrator | Monday 06 April 2026 02:51:48 +0000 (0:00:00.158) 0:00:48.785 **********
2026-04-06 02:51:48.993832 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:48.993839 | orchestrator |
2026-04-06 02:51:48.993845 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-06 02:51:48.993852 | orchestrator | Monday 06 April 2026 02:51:48 +0000 (0:00:00.123) 0:00:48.909 **********
2026-04-06 02:51:48.993859 | orchestrator | ok: [testbed-node-4] => {
2026-04-06 02:51:48.993866 | orchestrator |  "vgs_report": {
2026-04-06 02:51:48.993872 | orchestrator |  "vg": []
2026-04-06 02:51:48.993879 | orchestrator |  }
2026-04-06 02:51:48.993886 | orchestrator | }
2026-04-06 02:51:48.993893 | orchestrator |
2026-04-06 02:51:48.993899 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-06 02:51:48.993906 | orchestrator | Monday 06 April 2026 02:51:48 +0000 (0:00:00.158) 0:00:49.068 **********
2026-04-06 02:51:48.993913 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:48.993919 | orchestrator |
2026-04-06 02:51:48.993926 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-06 02:51:48.993933 | orchestrator | Monday 06 April 2026 02:51:48 +0000 (0:00:00.148) 0:00:49.217 **********
2026-04-06 02:51:48.993940 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:48.993946 | orchestrator |
2026-04-06 02:51:48.993953 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-06 02:51:48.993960 | orchestrator | Monday 06 April 2026 02:51:48 +0000 (0:00:00.152) 0:00:49.370 **********
2026-04-06 02:51:48.993966 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:48.993974 | orchestrator |
2026-04-06 02:51:48.993985 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-06 02:51:48.993996 | orchestrator | Monday 06 April 2026 02:51:48 +0000 (0:00:00.139) 0:00:49.509 **********
2026-04-06 02:51:48.994075 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:48.994088 | orchestrator |
2026-04-06 02:51:48.994104 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-06 02:51:54.412793 | orchestrator | Monday 06 April 2026 02:51:48 +0000 (0:00:00.186) 0:00:49.695 **********
2026-04-06 02:51:54.412901 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:54.412915 | orchestrator |
2026-04-06 02:51:54.412928 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-06 02:51:54.412943 | orchestrator | Monday 06 April 2026 02:51:49 +0000 (0:00:00.401) 0:00:50.097 **********
2026-04-06 02:51:54.412955 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:54.412969 | orchestrator |
2026-04-06 02:51:54.412983 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-06 02:51:54.412996 | orchestrator | Monday 06 April 2026 02:51:49 +0000 (0:00:00.171) 0:00:50.269 **********
2026-04-06 02:51:54.413010 | orchestrator | skipping: [testbed-node-4]
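The 'Gather DB/WAL/DB+WAL VGs with total and available size in bytes' tasks above query LVM and register their output, and the 'Combine JSON from _db/wal/db_wal_vgs_cmd_output' step merges the three reports into the vgs_report structure printed shortly after (here with an empty vg list, since this testbed defines no DB or WAL devices). Assuming each gather task registers the stdout of something like `vgs --reportformat json --units b -o vg_name,vg_size,vg_free`, the combine step can be sketched as follows; the module call and filter chain are assumptions based only on the variable names visible in the log:

```yaml
# Sketch only: the real task body is not shown in the log; this illustrates
# merging three LVM JSON reports ({"report": [{"vg": [...]}]}) into one list.
- name: Combine JSON from _db/wal/db_wal_vgs_cmd_output
  ansible.builtin.set_fact:
    vgs_report:
      vg: >-
        {{ (_db_vgs_cmd_output.stdout | from_json).report.0.vg
           + (_wal_vgs_cmd_output.stdout | from_json).report.0.vg
           + (_db_wal_vgs_cmd_output.stdout | from_json).report.0.vg }}
```

With all three reports empty, this yields the "vgs_report": {"vg": []} value printed by the 'Print LVM VGs report data' task, and the subsequent size calculations and checks are skipped.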
2026-04-06 02:51:54.413024 | orchestrator |
2026-04-06 02:51:54.413055 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-06 02:51:54.413070 | orchestrator | Monday 06 April 2026 02:51:49 +0000 (0:00:00.173) 0:00:50.442 **********
2026-04-06 02:51:54.413084 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:54.413097 | orchestrator |
2026-04-06 02:51:54.413111 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-06 02:51:54.413125 | orchestrator | Monday 06 April 2026 02:51:49 +0000 (0:00:00.147) 0:00:50.590 **********
2026-04-06 02:51:54.413134 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:54.413143 | orchestrator |
2026-04-06 02:51:54.413151 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-06 02:51:54.413159 | orchestrator | Monday 06 April 2026 02:51:50 +0000 (0:00:00.166) 0:00:50.757 **********
2026-04-06 02:51:54.413167 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:54.413175 | orchestrator |
2026-04-06 02:51:54.413183 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-06 02:51:54.413192 | orchestrator | Monday 06 April 2026 02:51:50 +0000 (0:00:00.149) 0:00:50.906 **********
2026-04-06 02:51:54.413200 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:54.413208 | orchestrator |
2026-04-06 02:51:54.413216 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-06 02:51:54.413224 | orchestrator | Monday 06 April 2026 02:51:50 +0000 (0:00:00.171) 0:00:51.077 **********
2026-04-06 02:51:54.413232 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:54.413240 | orchestrator |
2026-04-06 02:51:54.413248 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-06 02:51:54.413256 | orchestrator | Monday 06 April 2026 02:51:50 +0000 (0:00:00.162) 0:00:51.240 **********
2026-04-06 02:51:54.413299 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:54.413315 | orchestrator |
2026-04-06 02:51:54.413335 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-06 02:51:54.413354 | orchestrator | Monday 06 April 2026 02:51:50 +0000 (0:00:00.152) 0:00:51.392 **********
2026-04-06 02:51:54.413366 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:54.413380 | orchestrator |
2026-04-06 02:51:54.413395 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-06 02:51:54.413409 | orchestrator | Monday 06 April 2026 02:51:50 +0000 (0:00:00.161) 0:00:51.554 **********
2026-04-06 02:51:54.413425 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 02:51:54.413441 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 02:51:54.413454 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:54.413466 | orchestrator |
2026-04-06 02:51:54.413480 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-06 02:51:54.413520 | orchestrator | Monday 06 April 2026 02:51:51 +0000 (0:00:00.185) 0:00:51.739 **********
2026-04-06 02:51:54.413536 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 02:51:54.413549 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 02:51:54.413564 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:54.413578 | orchestrator |
2026-04-06 02:51:54.413593 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-06 02:51:54.413608 | orchestrator | Monday 06 April 2026 02:51:51 +0000 (0:00:00.176) 0:00:51.916 **********
2026-04-06 02:51:54.413624 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 02:51:54.413639 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 02:51:54.413652 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:54.413663 | orchestrator |
2026-04-06 02:51:54.413672 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-06 02:51:54.413683 | orchestrator | Monday 06 April 2026 02:51:51 +0000 (0:00:00.452) 0:00:52.368 **********
2026-04-06 02:51:54.413692 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 02:51:54.413701 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 02:51:54.413709 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:54.413717 | orchestrator |
2026-04-06 02:51:54.413744 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-06 02:51:54.413753 | orchestrator | Monday 06 April 2026 02:51:51 +0000 (0:00:00.185) 0:00:52.554 **********
2026-04-06 02:51:54.413761 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 02:51:54.413769 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 02:51:54.413777 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:54.413785 | orchestrator |
2026-04-06 02:51:54.413801 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-06 02:51:54.413810 | orchestrator | Monday 06 April 2026 02:51:52 +0000 (0:00:00.157) 0:00:52.711 **********
2026-04-06 02:51:54.413818 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 02:51:54.413826 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 02:51:54.413834 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:54.413842 | orchestrator |
2026-04-06 02:51:54.413850 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-06 02:51:54.413858 | orchestrator | Monday 06 April 2026 02:51:52 +0000 (0:00:00.200) 0:00:52.912 **********
2026-04-06 02:51:54.413866 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 02:51:54.413874 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 02:51:54.413881 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:54.413897 | orchestrator |
2026-04-06 02:51:54.413906 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-06 02:51:54.413913 | orchestrator | Monday 06 April 2026 02:51:52 +0000 (0:00:00.194) 0:00:53.106 **********
2026-04-06 02:51:54.413941 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 02:51:54.413965 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 02:51:54.413980 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:54.413993 | orchestrator |
2026-04-06 02:51:54.414006 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-06 02:51:54.414072 | orchestrator | Monday 06 April 2026 02:51:52 +0000 (0:00:00.191) 0:00:53.297 **********
2026-04-06 02:51:54.414083 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:51:54.414092 | orchestrator |
2026-04-06 02:51:54.414100 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-06 02:51:54.414108 | orchestrator | Monday 06 April 2026 02:51:53 +0000 (0:00:00.553) 0:00:53.851 **********
2026-04-06 02:51:54.414116 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:51:54.414123 | orchestrator |
2026-04-06 02:51:54.414131 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-06 02:51:54.414139 | orchestrator | Monday 06 April 2026 02:51:53 +0000 (0:00:00.581) 0:00:54.433 **********
2026-04-06 02:51:54.414147 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:51:54.414154 | orchestrator |
2026-04-06 02:51:54.414162 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-06 02:51:54.414170 | orchestrator | Monday 06 April 2026 02:51:53 +0000 (0:00:00.152) 0:00:54.586 **********
2026-04-06 02:51:54.414178 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'vg_name': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 02:51:54.414187 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'vg_name': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 02:51:54.414195 | orchestrator |
2026-04-06 02:51:54.414203 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-06 02:51:54.414211 | orchestrator | Monday 06 April 2026 02:51:54 +0000 (0:00:00.193) 0:00:54.779 **********
2026-04-06 02:51:54.414219 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 02:51:54.414227 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 02:51:54.414235 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:51:54.414243 | orchestrator |
2026-04-06 02:51:54.414251 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-06 02:51:54.414259 | orchestrator | Monday 06 April 2026 02:51:54 +0000 (0:00:00.167) 0:00:54.947 **********
2026-04-06 02:51:54.414298 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 02:51:54.414316 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 02:52:01.672062 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:52:01.672168 | orchestrator |
2026-04-06 02:52:01.672182 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-06 02:52:01.672193 |
orchestrator | Monday 06 April 2026 02:51:54 +0000 (0:00:00.172) 0:00:55.119 ********** 2026-04-06 02:52:01.672201 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})  2026-04-06 02:52:01.672249 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})  2026-04-06 02:52:01.672260 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:52:01.672306 | orchestrator | 2026-04-06 02:52:01.672316 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-06 02:52:01.672325 | orchestrator | Monday 06 April 2026 02:51:54 +0000 (0:00:00.389) 0:00:55.509 ********** 2026-04-06 02:52:01.672333 | orchestrator | ok: [testbed-node-4] => { 2026-04-06 02:52:01.672342 | orchestrator |  "lvm_report": { 2026-04-06 02:52:01.672351 | orchestrator |  "lv": [ 2026-04-06 02:52:01.672360 | orchestrator |  { 2026-04-06 02:52:01.672368 | orchestrator |  "lv_name": "osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a", 2026-04-06 02:52:01.672377 | orchestrator |  "vg_name": "ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a" 2026-04-06 02:52:01.672385 | orchestrator |  }, 2026-04-06 02:52:01.672393 | orchestrator |  { 2026-04-06 02:52:01.672402 | orchestrator |  "lv_name": "osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3", 2026-04-06 02:52:01.672410 | orchestrator |  "vg_name": "ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3" 2026-04-06 02:52:01.672418 | orchestrator |  } 2026-04-06 02:52:01.672427 | orchestrator |  ], 2026-04-06 02:52:01.672435 | orchestrator |  "pv": [ 2026-04-06 02:52:01.672443 | orchestrator |  { 2026-04-06 02:52:01.672451 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-06 02:52:01.672459 | orchestrator |  "vg_name": "ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3" 2026-04-06 02:52:01.672469 | orchestrator |  }, 2026-04-06 
02:52:01.672477 | orchestrator |  { 2026-04-06 02:52:01.672485 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-06 02:52:01.672493 | orchestrator |  "vg_name": "ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a" 2026-04-06 02:52:01.672501 | orchestrator |  } 2026-04-06 02:52:01.672510 | orchestrator |  ] 2026-04-06 02:52:01.672518 | orchestrator |  } 2026-04-06 02:52:01.672527 | orchestrator | } 2026-04-06 02:52:01.672536 | orchestrator | 2026-04-06 02:52:01.672544 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-06 02:52:01.672552 | orchestrator | 2026-04-06 02:52:01.672560 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-06 02:52:01.672568 | orchestrator | Monday 06 April 2026 02:51:55 +0000 (0:00:00.344) 0:00:55.854 ********** 2026-04-06 02:52:01.672576 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-06 02:52:01.672585 | orchestrator | 2026-04-06 02:52:01.672594 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-06 02:52:01.672603 | orchestrator | Monday 06 April 2026 02:51:55 +0000 (0:00:00.284) 0:00:56.139 ********** 2026-04-06 02:52:01.672613 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:52:01.672621 | orchestrator | 2026-04-06 02:52:01.672630 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:52:01.672639 | orchestrator | Monday 06 April 2026 02:51:55 +0000 (0:00:00.271) 0:00:56.410 ********** 2026-04-06 02:52:01.672649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-06 02:52:01.672657 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-06 02:52:01.672666 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-06 02:52:01.672675 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-06 02:52:01.672684 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-06 02:52:01.672693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-06 02:52:01.672702 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-06 02:52:01.672721 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-06 02:52:01.672731 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-06 02:52:01.672740 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-06 02:52:01.672749 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-06 02:52:01.672758 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-06 02:52:01.672766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-06 02:52:01.672775 | orchestrator | 2026-04-06 02:52:01.672784 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:52:01.672793 | orchestrator | Monday 06 April 2026 02:51:56 +0000 (0:00:00.476) 0:00:56.887 ********** 2026-04-06 02:52:01.672802 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:01.672811 | orchestrator | 2026-04-06 02:52:01.672820 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:52:01.672829 | orchestrator | Monday 06 April 2026 02:51:56 +0000 (0:00:00.240) 0:00:57.128 ********** 2026-04-06 02:52:01.672837 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:01.672846 | orchestrator | 2026-04-06 
02:52:01.672855 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:52:01.672883 | orchestrator | Monday 06 April 2026 02:51:56 +0000 (0:00:00.221) 0:00:57.350 ********** 2026-04-06 02:52:01.672893 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:01.672902 | orchestrator | 2026-04-06 02:52:01.672911 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:52:01.672920 | orchestrator | Monday 06 April 2026 02:51:56 +0000 (0:00:00.246) 0:00:57.596 ********** 2026-04-06 02:52:01.672928 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:01.672938 | orchestrator | 2026-04-06 02:52:01.672947 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:52:01.672957 | orchestrator | Monday 06 April 2026 02:51:57 +0000 (0:00:00.751) 0:00:58.348 ********** 2026-04-06 02:52:01.672966 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:01.672974 | orchestrator | 2026-04-06 02:52:01.672982 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:52:01.672991 | orchestrator | Monday 06 April 2026 02:51:57 +0000 (0:00:00.222) 0:00:58.571 ********** 2026-04-06 02:52:01.672999 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:01.673007 | orchestrator | 2026-04-06 02:52:01.673016 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:52:01.673024 | orchestrator | Monday 06 April 2026 02:51:58 +0000 (0:00:00.226) 0:00:58.798 ********** 2026-04-06 02:52:01.673033 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:01.673041 | orchestrator | 2026-04-06 02:52:01.673050 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:52:01.673058 | orchestrator | Monday 06 April 2026 02:51:58 +0000 (0:00:00.209) 
0:00:59.007 ********** 2026-04-06 02:52:01.673066 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:01.673074 | orchestrator | 2026-04-06 02:52:01.673083 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:52:01.673091 | orchestrator | Monday 06 April 2026 02:51:58 +0000 (0:00:00.230) 0:00:59.237 ********** 2026-04-06 02:52:01.673100 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8) 2026-04-06 02:52:01.673110 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8) 2026-04-06 02:52:01.673118 | orchestrator | 2026-04-06 02:52:01.673127 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:52:01.673135 | orchestrator | Monday 06 April 2026 02:51:59 +0000 (0:00:00.505) 0:00:59.743 ********** 2026-04-06 02:52:01.673221 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d) 2026-04-06 02:52:01.673246 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d) 2026-04-06 02:52:01.673256 | orchestrator | 2026-04-06 02:52:01.673265 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:52:01.673305 | orchestrator | Monday 06 April 2026 02:51:59 +0000 (0:00:00.480) 0:01:00.223 ********** 2026-04-06 02:52:01.673314 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0) 2026-04-06 02:52:01.673323 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0) 2026-04-06 02:52:01.673331 | orchestrator | 2026-04-06 02:52:01.673340 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:52:01.673348 | orchestrator | Monday 06 
April 2026 02:51:59 +0000 (0:00:00.475) 0:01:00.699 ********** 2026-04-06 02:52:01.673356 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c) 2026-04-06 02:52:01.673365 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c) 2026-04-06 02:52:01.673373 | orchestrator | 2026-04-06 02:52:01.673382 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-06 02:52:01.673390 | orchestrator | Monday 06 April 2026 02:52:00 +0000 (0:00:00.542) 0:01:01.241 ********** 2026-04-06 02:52:01.673399 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-06 02:52:01.673407 | orchestrator | 2026-04-06 02:52:01.673416 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:52:01.673424 | orchestrator | Monday 06 April 2026 02:52:00 +0000 (0:00:00.392) 0:01:01.633 ********** 2026-04-06 02:52:01.673433 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-06 02:52:01.673441 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-06 02:52:01.673449 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-06 02:52:01.673458 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-06 02:52:01.673466 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-06 02:52:01.673474 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-06 02:52:01.673482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-06 02:52:01.673491 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-06 02:52:01.673499 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-06 02:52:01.673507 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-06 02:52:01.673515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-06 02:52:01.673533 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-06 02:52:11.855708 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-06 02:52:11.855820 | orchestrator | 2026-04-06 02:52:11.855841 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:52:11.855856 | orchestrator | Monday 06 April 2026 02:52:01 +0000 (0:00:00.736) 0:01:02.370 ********** 2026-04-06 02:52:11.855870 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:11.855885 | orchestrator | 2026-04-06 02:52:11.855900 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:52:11.855931 | orchestrator | Monday 06 April 2026 02:52:01 +0000 (0:00:00.277) 0:01:02.647 ********** 2026-04-06 02:52:11.855946 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:11.855979 | orchestrator | 2026-04-06 02:52:11.855988 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:52:11.855996 | orchestrator | Monday 06 April 2026 02:52:02 +0000 (0:00:00.234) 0:01:02.882 ********** 2026-04-06 02:52:11.856004 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:11.856012 | orchestrator | 2026-04-06 02:52:11.856020 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:52:11.856028 | 
orchestrator | Monday 06 April 2026 02:52:02 +0000 (0:00:00.227) 0:01:03.109 ********** 2026-04-06 02:52:11.856036 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:11.856044 | orchestrator | 2026-04-06 02:52:11.856052 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:52:11.856060 | orchestrator | Monday 06 April 2026 02:52:02 +0000 (0:00:00.247) 0:01:03.357 ********** 2026-04-06 02:52:11.856067 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:11.856075 | orchestrator | 2026-04-06 02:52:11.856083 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:52:11.856091 | orchestrator | Monday 06 April 2026 02:52:02 +0000 (0:00:00.283) 0:01:03.640 ********** 2026-04-06 02:52:11.856099 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:11.856107 | orchestrator | 2026-04-06 02:52:11.856115 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:52:11.856123 | orchestrator | Monday 06 April 2026 02:52:03 +0000 (0:00:00.252) 0:01:03.892 ********** 2026-04-06 02:52:11.856131 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:11.856139 | orchestrator | 2026-04-06 02:52:11.856146 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:52:11.856155 | orchestrator | Monday 06 April 2026 02:52:03 +0000 (0:00:00.243) 0:01:04.136 ********** 2026-04-06 02:52:11.856163 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:11.856171 | orchestrator | 2026-04-06 02:52:11.856179 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:52:11.856186 | orchestrator | Monday 06 April 2026 02:52:03 +0000 (0:00:00.216) 0:01:04.352 ********** 2026-04-06 02:52:11.856194 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-06 02:52:11.856203 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-04-06 02:52:11.856211 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-06 02:52:11.856219 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-06 02:52:11.856226 | orchestrator | 2026-04-06 02:52:11.856235 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:52:11.856245 | orchestrator | Monday 06 April 2026 02:52:04 +0000 (0:00:01.057) 0:01:05.410 ********** 2026-04-06 02:52:11.856254 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:11.856263 | orchestrator | 2026-04-06 02:52:11.856272 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:52:11.856327 | orchestrator | Monday 06 April 2026 02:52:05 +0000 (0:00:00.804) 0:01:06.215 ********** 2026-04-06 02:52:11.856337 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:11.856346 | orchestrator | 2026-04-06 02:52:11.856355 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:52:11.856364 | orchestrator | Monday 06 April 2026 02:52:05 +0000 (0:00:00.245) 0:01:06.461 ********** 2026-04-06 02:52:11.856372 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:11.856380 | orchestrator | 2026-04-06 02:52:11.856388 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-06 02:52:11.856396 | orchestrator | Monday 06 April 2026 02:52:06 +0000 (0:00:00.255) 0:01:06.716 ********** 2026-04-06 02:52:11.856404 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:11.856412 | orchestrator | 2026-04-06 02:52:11.856420 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-06 02:52:11.856428 | orchestrator | Monday 06 April 2026 02:52:06 +0000 (0:00:00.248) 0:01:06.965 ********** 2026-04-06 02:52:11.856436 | orchestrator | skipping: [testbed-node-5] 2026-04-06 
02:52:11.856444 | orchestrator | 2026-04-06 02:52:11.856460 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-06 02:52:11.856468 | orchestrator | Monday 06 April 2026 02:52:06 +0000 (0:00:00.143) 0:01:07.108 ********** 2026-04-06 02:52:11.856477 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fcd584d6-c8ff-5eaf-81cc-26105cfb5447'}}) 2026-04-06 02:52:11.856485 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4d79f264-f564-5244-b3d4-1e30cd615742'}}) 2026-04-06 02:52:11.856493 | orchestrator | 2026-04-06 02:52:11.856502 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-06 02:52:11.856510 | orchestrator | Monday 06 April 2026 02:52:06 +0000 (0:00:00.220) 0:01:07.329 ********** 2026-04-06 02:52:11.856519 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'}) 2026-04-06 02:52:11.856529 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'}) 2026-04-06 02:52:11.856537 | orchestrator | 2026-04-06 02:52:11.856545 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-06 02:52:11.856570 | orchestrator | Monday 06 April 2026 02:52:08 +0000 (0:00:01.894) 0:01:09.224 ********** 2026-04-06 02:52:11.856579 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})  2026-04-06 02:52:11.856588 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})  2026-04-06 02:52:11.856596 | orchestrator | skipping: 
[testbed-node-5] 2026-04-06 02:52:11.856604 | orchestrator | 2026-04-06 02:52:11.856617 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-06 02:52:11.856626 | orchestrator | Monday 06 April 2026 02:52:08 +0000 (0:00:00.158) 0:01:09.382 ********** 2026-04-06 02:52:11.856634 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'}) 2026-04-06 02:52:11.856642 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'}) 2026-04-06 02:52:11.856649 | orchestrator | 2026-04-06 02:52:11.856657 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-06 02:52:11.856665 | orchestrator | Monday 06 April 2026 02:52:10 +0000 (0:00:01.392) 0:01:10.774 ********** 2026-04-06 02:52:11.856673 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})  2026-04-06 02:52:11.856681 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})  2026-04-06 02:52:11.856689 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:11.856697 | orchestrator | 2026-04-06 02:52:11.856705 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-06 02:52:11.856713 | orchestrator | Monday 06 April 2026 02:52:10 +0000 (0:00:00.165) 0:01:10.940 ********** 2026-04-06 02:52:11.856721 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:11.856729 | orchestrator | 2026-04-06 02:52:11.856737 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-06 02:52:11.856745 | 
orchestrator | Monday 06 April 2026 02:52:10 +0000 (0:00:00.155) 0:01:11.095 ********** 2026-04-06 02:52:11.856753 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})  2026-04-06 02:52:11.856761 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})  2026-04-06 02:52:11.856775 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:11.856783 | orchestrator | 2026-04-06 02:52:11.856791 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-06 02:52:11.856799 | orchestrator | Monday 06 April 2026 02:52:10 +0000 (0:00:00.395) 0:01:11.490 ********** 2026-04-06 02:52:11.856807 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:11.856814 | orchestrator | 2026-04-06 02:52:11.856822 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-06 02:52:11.856830 | orchestrator | Monday 06 April 2026 02:52:10 +0000 (0:00:00.171) 0:01:11.661 ********** 2026-04-06 02:52:11.856838 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})  2026-04-06 02:52:11.856846 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})  2026-04-06 02:52:11.856854 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:11.856862 | orchestrator | 2026-04-06 02:52:11.856870 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-04-06 02:52:11.856878 | orchestrator | Monday 06 April 2026 02:52:11 +0000 (0:00:00.185) 0:01:11.847 ********** 2026-04-06 02:52:11.856885 | orchestrator | 
skipping: [testbed-node-5] 2026-04-06 02:52:11.856893 | orchestrator | 2026-04-06 02:52:11.856901 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-06 02:52:11.856909 | orchestrator | Monday 06 April 2026 02:52:11 +0000 (0:00:00.190) 0:01:12.038 ********** 2026-04-06 02:52:11.856917 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})  2026-04-06 02:52:11.856925 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})  2026-04-06 02:52:11.856933 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:11.856941 | orchestrator | 2026-04-06 02:52:11.856949 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-06 02:52:11.856957 | orchestrator | Monday 06 April 2026 02:52:11 +0000 (0:00:00.180) 0:01:12.218 ********** 2026-04-06 02:52:11.856965 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:52:11.856973 | orchestrator | 2026-04-06 02:52:11.856980 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-06 02:52:11.856988 | orchestrator | Monday 06 April 2026 02:52:11 +0000 (0:00:00.148) 0:01:12.367 ********** 2026-04-06 02:52:11.857008 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})  2026-04-06 02:52:18.991088 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})  2026-04-06 02:52:18.991203 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.991219 | orchestrator | 2026-04-06 02:52:18.991232 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-04-06 02:52:18.991245 | orchestrator | Monday 06 April 2026 02:52:11 +0000 (0:00:00.191) 0:01:12.558 ********** 2026-04-06 02:52:18.991274 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})  2026-04-06 02:52:18.991351 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})  2026-04-06 02:52:18.991373 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.991392 | orchestrator | 2026-04-06 02:52:18.991405 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-06 02:52:18.991416 | orchestrator | Monday 06 April 2026 02:52:12 +0000 (0:00:00.192) 0:01:12.751 ********** 2026-04-06 02:52:18.991448 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})  2026-04-06 02:52:18.991460 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})  2026-04-06 02:52:18.991471 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.991482 | orchestrator | 2026-04-06 02:52:18.991493 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-06 02:52:18.991504 | orchestrator | Monday 06 April 2026 02:52:12 +0000 (0:00:00.185) 0:01:12.936 ********** 2026-04-06 02:52:18.991515 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.991526 | orchestrator | 2026-04-06 02:52:18.991536 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-06 02:52:18.991547 | orchestrator | Monday 06 April 2026 02:52:12 +0000 
(0:00:00.162) 0:01:13.098 ********** 2026-04-06 02:52:18.991558 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.991570 | orchestrator | 2026-04-06 02:52:18.991581 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-06 02:52:18.991592 | orchestrator | Monday 06 April 2026 02:52:12 +0000 (0:00:00.166) 0:01:13.265 ********** 2026-04-06 02:52:18.991603 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.991616 | orchestrator | 2026-04-06 02:52:18.991628 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-06 02:52:18.991642 | orchestrator | Monday 06 April 2026 02:52:12 +0000 (0:00:00.408) 0:01:13.673 ********** 2026-04-06 02:52:18.991654 | orchestrator | ok: [testbed-node-5] => { 2026-04-06 02:52:18.991668 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-06 02:52:18.991681 | orchestrator | } 2026-04-06 02:52:18.991693 | orchestrator | 2026-04-06 02:52:18.991706 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-06 02:52:18.991720 | orchestrator | Monday 06 April 2026 02:52:13 +0000 (0:00:00.166) 0:01:13.840 ********** 2026-04-06 02:52:18.991733 | orchestrator | ok: [testbed-node-5] => { 2026-04-06 02:52:18.991745 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-06 02:52:18.991758 | orchestrator | } 2026-04-06 02:52:18.991770 | orchestrator | 2026-04-06 02:52:18.991783 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-06 02:52:18.991795 | orchestrator | Monday 06 April 2026 02:52:13 +0000 (0:00:00.166) 0:01:14.007 ********** 2026-04-06 02:52:18.991808 | orchestrator | ok: [testbed-node-5] => { 2026-04-06 02:52:18.991820 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-06 02:52:18.991832 | orchestrator | } 2026-04-06 02:52:18.991845 | orchestrator | 2026-04-06 02:52:18.991858 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-06 02:52:18.991870 | orchestrator | Monday 06 April 2026 02:52:13 +0000 (0:00:00.179) 0:01:14.186 ********** 2026-04-06 02:52:18.991881 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:52:18.991892 | orchestrator | 2026-04-06 02:52:18.991903 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-06 02:52:18.991914 | orchestrator | Monday 06 April 2026 02:52:14 +0000 (0:00:00.582) 0:01:14.769 ********** 2026-04-06 02:52:18.991925 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:52:18.991936 | orchestrator | 2026-04-06 02:52:18.991947 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-06 02:52:18.991958 | orchestrator | Monday 06 April 2026 02:52:14 +0000 (0:00:00.530) 0:01:15.300 ********** 2026-04-06 02:52:18.991968 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:52:18.991979 | orchestrator | 2026-04-06 02:52:18.991990 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-06 02:52:18.992001 | orchestrator | Monday 06 April 2026 02:52:15 +0000 (0:00:00.518) 0:01:15.819 ********** 2026-04-06 02:52:18.992015 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:52:18.992033 | orchestrator | 2026-04-06 02:52:18.992061 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-06 02:52:18.992096 | orchestrator | Monday 06 April 2026 02:52:15 +0000 (0:00:00.174) 0:01:15.994 ********** 2026-04-06 02:52:18.992114 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.992131 | orchestrator | 2026-04-06 02:52:18.992148 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-06 02:52:18.992166 | orchestrator | Monday 06 April 2026 02:52:15 +0000 (0:00:00.119) 0:01:16.114 ********** 2026-04-06 02:52:18.992185 | 
orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.992204 | orchestrator | 2026-04-06 02:52:18.992216 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-06 02:52:18.992227 | orchestrator | Monday 06 April 2026 02:52:15 +0000 (0:00:00.119) 0:01:16.233 ********** 2026-04-06 02:52:18.992237 | orchestrator | ok: [testbed-node-5] => { 2026-04-06 02:52:18.992248 | orchestrator |  "vgs_report": { 2026-04-06 02:52:18.992259 | orchestrator |  "vg": [] 2026-04-06 02:52:18.992350 | orchestrator |  } 2026-04-06 02:52:18.992365 | orchestrator | } 2026-04-06 02:52:18.992375 | orchestrator | 2026-04-06 02:52:18.992387 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-06 02:52:18.992398 | orchestrator | Monday 06 April 2026 02:52:15 +0000 (0:00:00.169) 0:01:16.403 ********** 2026-04-06 02:52:18.992409 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.992420 | orchestrator | 2026-04-06 02:52:18.992438 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-06 02:52:18.992474 | orchestrator | Monday 06 April 2026 02:52:15 +0000 (0:00:00.155) 0:01:16.558 ********** 2026-04-06 02:52:18.992501 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.992519 | orchestrator | 2026-04-06 02:52:18.992537 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-06 02:52:18.992557 | orchestrator | Monday 06 April 2026 02:52:16 +0000 (0:00:00.393) 0:01:16.952 ********** 2026-04-06 02:52:18.992575 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.992595 | orchestrator | 2026-04-06 02:52:18.992615 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-06 02:52:18.992636 | orchestrator | Monday 06 April 2026 02:52:16 +0000 (0:00:00.160) 0:01:17.112 ********** 2026-04-06 02:52:18.992658 | 
orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.992677 | orchestrator | 2026-04-06 02:52:18.992694 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-06 02:52:18.992705 | orchestrator | Monday 06 April 2026 02:52:16 +0000 (0:00:00.144) 0:01:17.257 ********** 2026-04-06 02:52:18.992716 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.992726 | orchestrator | 2026-04-06 02:52:18.992737 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-06 02:52:18.992748 | orchestrator | Monday 06 April 2026 02:52:16 +0000 (0:00:00.161) 0:01:17.419 ********** 2026-04-06 02:52:18.992759 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.992770 | orchestrator | 2026-04-06 02:52:18.992781 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-06 02:52:18.992792 | orchestrator | Monday 06 April 2026 02:52:16 +0000 (0:00:00.160) 0:01:17.579 ********** 2026-04-06 02:52:18.992802 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.992813 | orchestrator | 2026-04-06 02:52:18.992824 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-06 02:52:18.992835 | orchestrator | Monday 06 April 2026 02:52:17 +0000 (0:00:00.157) 0:01:17.737 ********** 2026-04-06 02:52:18.992846 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.992857 | orchestrator | 2026-04-06 02:52:18.992868 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-06 02:52:18.992879 | orchestrator | Monday 06 April 2026 02:52:17 +0000 (0:00:00.147) 0:01:17.885 ********** 2026-04-06 02:52:18.992890 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.992901 | orchestrator | 2026-04-06 02:52:18.992912 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-04-06 02:52:18.992923 | orchestrator | Monday 06 April 2026 02:52:17 +0000 (0:00:00.160) 0:01:18.045 ********** 2026-04-06 02:52:18.992946 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.992957 | orchestrator | 2026-04-06 02:52:18.992968 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-06 02:52:18.992979 | orchestrator | Monday 06 April 2026 02:52:17 +0000 (0:00:00.153) 0:01:18.198 ********** 2026-04-06 02:52:18.992990 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.993001 | orchestrator | 2026-04-06 02:52:18.993012 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-06 02:52:18.993023 | orchestrator | Monday 06 April 2026 02:52:17 +0000 (0:00:00.166) 0:01:18.365 ********** 2026-04-06 02:52:18.993034 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.993045 | orchestrator | 2026-04-06 02:52:18.993056 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-06 02:52:18.993067 | orchestrator | Monday 06 April 2026 02:52:17 +0000 (0:00:00.161) 0:01:18.526 ********** 2026-04-06 02:52:18.993077 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.993088 | orchestrator | 2026-04-06 02:52:18.993099 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-06 02:52:18.993110 | orchestrator | Monday 06 April 2026 02:52:18 +0000 (0:00:00.425) 0:01:18.952 ********** 2026-04-06 02:52:18.993121 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.993132 | orchestrator | 2026-04-06 02:52:18.993143 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-06 02:52:18.993154 | orchestrator | Monday 06 April 2026 02:52:18 +0000 (0:00:00.214) 0:01:19.166 ********** 2026-04-06 02:52:18.993165 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})  2026-04-06 02:52:18.993176 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})  2026-04-06 02:52:18.993187 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.993198 | orchestrator | 2026-04-06 02:52:18.993210 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-06 02:52:18.993221 | orchestrator | Monday 06 April 2026 02:52:18 +0000 (0:00:00.169) 0:01:19.336 ********** 2026-04-06 02:52:18.993232 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})  2026-04-06 02:52:18.993249 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})  2026-04-06 02:52:18.993272 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:18.993328 | orchestrator | 2026-04-06 02:52:18.993344 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-06 02:52:18.993361 | orchestrator | Monday 06 April 2026 02:52:18 +0000 (0:00:00.178) 0:01:19.514 ********** 2026-04-06 02:52:18.993392 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})  2026-04-06 02:52:22.356604 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})  2026-04-06 02:52:22.356725 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:22.356751 | orchestrator | 2026-04-06 02:52:22.356793 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-04-06 02:52:22.356815 | orchestrator | Monday 06 April 2026 02:52:18 +0000 (0:00:00.183) 0:01:19.697 ********** 2026-04-06 02:52:22.356835 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})  2026-04-06 02:52:22.356856 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})  2026-04-06 02:52:22.356902 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:22.356923 | orchestrator | 2026-04-06 02:52:22.356941 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-06 02:52:22.356962 | orchestrator | Monday 06 April 2026 02:52:19 +0000 (0:00:00.177) 0:01:19.875 ********** 2026-04-06 02:52:22.356981 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})  2026-04-06 02:52:22.357000 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})  2026-04-06 02:52:22.357020 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:22.357039 | orchestrator | 2026-04-06 02:52:22.357057 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-06 02:52:22.357070 | orchestrator | Monday 06 April 2026 02:52:19 +0000 (0:00:00.164) 0:01:20.039 ********** 2026-04-06 02:52:22.357087 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})  2026-04-06 02:52:22.357105 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})  2026-04-06 02:52:22.357124 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:22.357142 | orchestrator | 2026-04-06 02:52:22.357160 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-06 02:52:22.357215 | orchestrator | Monday 06 April 2026 02:52:19 +0000 (0:00:00.185) 0:01:20.225 ********** 2026-04-06 02:52:22.357233 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})  2026-04-06 02:52:22.357252 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})  2026-04-06 02:52:22.357271 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:22.357322 | orchestrator | 2026-04-06 02:52:22.357342 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-06 02:52:22.357361 | orchestrator | Monday 06 April 2026 02:52:19 +0000 (0:00:00.177) 0:01:20.403 ********** 2026-04-06 02:52:22.357381 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})  2026-04-06 02:52:22.357400 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})  2026-04-06 02:52:22.357419 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:22.357438 | orchestrator | 2026-04-06 02:52:22.357458 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-06 02:52:22.357478 | orchestrator | Monday 06 April 2026 02:52:19 +0000 (0:00:00.171) 0:01:20.575 ********** 2026-04-06 02:52:22.357497 | 
orchestrator | ok: [testbed-node-5] 2026-04-06 02:52:22.357517 | orchestrator | 2026-04-06 02:52:22.357535 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-06 02:52:22.357554 | orchestrator | Monday 06 April 2026 02:52:20 +0000 (0:00:00.542) 0:01:21.117 ********** 2026-04-06 02:52:22.357566 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:52:22.357577 | orchestrator | 2026-04-06 02:52:22.357588 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-06 02:52:22.357600 | orchestrator | Monday 06 April 2026 02:52:21 +0000 (0:00:00.806) 0:01:21.924 ********** 2026-04-06 02:52:22.357611 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:52:22.357622 | orchestrator | 2026-04-06 02:52:22.357632 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-06 02:52:22.357644 | orchestrator | Monday 06 April 2026 02:52:21 +0000 (0:00:00.167) 0:01:22.092 ********** 2026-04-06 02:52:22.357669 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'vg_name': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'}) 2026-04-06 02:52:22.357682 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'vg_name': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'}) 2026-04-06 02:52:22.357693 | orchestrator | 2026-04-06 02:52:22.357703 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-06 02:52:22.357718 | orchestrator | Monday 06 April 2026 02:52:21 +0000 (0:00:00.204) 0:01:22.296 ********** 2026-04-06 02:52:22.357763 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})  2026-04-06 02:52:22.357794 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})  2026-04-06 02:52:22.357815 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:22.357834 | orchestrator | 2026-04-06 02:52:22.357853 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-06 02:52:22.357870 | orchestrator | Monday 06 April 2026 02:52:21 +0000 (0:00:00.192) 0:01:22.489 ********** 2026-04-06 02:52:22.357889 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})  2026-04-06 02:52:22.357909 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})  2026-04-06 02:52:22.357927 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:22.357945 | orchestrator | 2026-04-06 02:52:22.357962 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-06 02:52:22.357979 | orchestrator | Monday 06 April 2026 02:52:21 +0000 (0:00:00.182) 0:01:22.671 ********** 2026-04-06 02:52:22.357995 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})  2026-04-06 02:52:22.358012 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})  2026-04-06 02:52:22.358115 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:22.358134 | orchestrator | 2026-04-06 02:52:22.358154 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-06 02:52:22.358167 | orchestrator | Monday 06 April 2026 02:52:22 +0000 (0:00:00.200) 0:01:22.872 ********** 2026-04-06 02:52:22.358178 | 
orchestrator | ok: [testbed-node-5] => { 2026-04-06 02:52:22.358224 | orchestrator |  "lvm_report": { 2026-04-06 02:52:22.358237 | orchestrator |  "lv": [ 2026-04-06 02:52:22.358248 | orchestrator |  { 2026-04-06 02:52:22.358259 | orchestrator |  "lv_name": "osd-block-4d79f264-f564-5244-b3d4-1e30cd615742", 2026-04-06 02:52:22.358271 | orchestrator |  "vg_name": "ceph-4d79f264-f564-5244-b3d4-1e30cd615742" 2026-04-06 02:52:22.358282 | orchestrator |  }, 2026-04-06 02:52:22.358330 | orchestrator |  { 2026-04-06 02:52:22.358343 | orchestrator |  "lv_name": "osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447", 2026-04-06 02:52:22.358354 | orchestrator |  "vg_name": "ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447" 2026-04-06 02:52:22.358365 | orchestrator |  } 2026-04-06 02:52:22.358376 | orchestrator |  ], 2026-04-06 02:52:22.358387 | orchestrator |  "pv": [ 2026-04-06 02:52:22.358398 | orchestrator |  { 2026-04-06 02:52:22.358408 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-06 02:52:22.358419 | orchestrator |  "vg_name": "ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447" 2026-04-06 02:52:22.358430 | orchestrator |  }, 2026-04-06 02:52:22.358441 | orchestrator |  { 2026-04-06 02:52:22.358452 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-06 02:52:22.358480 | orchestrator |  "vg_name": "ceph-4d79f264-f564-5244-b3d4-1e30cd615742" 2026-04-06 02:52:22.358491 | orchestrator |  } 2026-04-06 02:52:22.358502 | orchestrator |  ] 2026-04-06 02:52:22.358513 | orchestrator |  } 2026-04-06 02:52:22.358524 | orchestrator | } 2026-04-06 02:52:22.358535 | orchestrator | 2026-04-06 02:52:22.358546 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 02:52:22.358557 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-06 02:52:22.358569 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-06 02:52:22.358580 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-06 02:52:22.358591 | orchestrator | 2026-04-06 02:52:22.358602 | orchestrator | 2026-04-06 02:52:22.358613 | orchestrator | 2026-04-06 02:52:22.358624 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 02:52:22.358635 | orchestrator | Monday 06 April 2026 02:52:22 +0000 (0:00:00.165) 0:01:23.038 ********** 2026-04-06 02:52:22.358645 | orchestrator | =============================================================================== 2026-04-06 02:52:22.358657 | orchestrator | Create block VGs -------------------------------------------------------- 5.89s 2026-04-06 02:52:22.358667 | orchestrator | Create block LVs -------------------------------------------------------- 4.31s 2026-04-06 02:52:22.358678 | orchestrator | Add known partitions to the list of available block devices ------------- 2.26s 2026-04-06 02:52:22.358689 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.92s 2026-04-06 02:52:22.358700 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.81s 2026-04-06 02:52:22.358711 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.68s 2026-04-06 02:52:22.358722 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.66s 2026-04-06 02:52:22.358733 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.61s 2026-04-06 02:52:22.358757 | orchestrator | Add known links to the list of available block devices ------------------ 1.56s 2026-04-06 02:52:22.802729 | orchestrator | Add known partitions to the list of available block devices ------------- 1.06s 2026-04-06 02:52:22.802867 | orchestrator | Add known partitions to the list of available block devices ------------- 1.02s 2026-04-06 02:52:22.802890 | 
orchestrator | Add known links to the list of available block devices ------------------ 1.01s 2026-04-06 02:52:22.802924 | orchestrator | Fail if number of OSDs exceeds num_osds for a DB+WAL VG ----------------- 0.97s 2026-04-06 02:52:22.802936 | orchestrator | Calculate size needed for LVs on ceph_db_devices ------------------------ 0.93s 2026-04-06 02:52:22.802948 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.88s 2026-04-06 02:52:22.802959 | orchestrator | Print LVM report data --------------------------------------------------- 0.85s 2026-04-06 02:52:22.802969 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.83s 2026-04-06 02:52:22.802980 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.81s 2026-04-06 02:52:22.802991 | orchestrator | Get initial list of available block devices ----------------------------- 0.81s 2026-04-06 02:52:22.803002 | orchestrator | Add known partitions to the list of available block devices ------------- 0.81s 2026-04-06 02:52:35.475217 | orchestrator | 2026-04-06 02:52:35 | INFO  | Task 9bd7054e-3f2b-46af-8f16-67406e762d3b (facts) was prepared for execution. 2026-04-06 02:52:35.475351 | orchestrator | 2026-04-06 02:52:35 | INFO  | It takes a moment until task 9bd7054e-3f2b-46af-8f16-67406e762d3b (facts) has been started and output is visible here. 
2026-04-06 02:52:50.005166 | orchestrator | 2026-04-06 02:52:50.005283 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-06 02:52:50.005369 | orchestrator | 2026-04-06 02:52:50.005382 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-06 02:52:50.005392 | orchestrator | Monday 06 April 2026 02:52:40 +0000 (0:00:00.302) 0:00:00.302 ********** 2026-04-06 02:52:50.005403 | orchestrator | ok: [testbed-manager] 2026-04-06 02:52:50.005415 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:52:50.005425 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:52:50.005436 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:52:50.005443 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:52:50.005449 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:52:50.005455 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:52:50.005462 | orchestrator | 2026-04-06 02:52:50.005469 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-06 02:52:50.005479 | orchestrator | Monday 06 April 2026 02:52:41 +0000 (0:00:01.247) 0:00:01.550 ********** 2026-04-06 02:52:50.005490 | orchestrator | skipping: [testbed-manager] 2026-04-06 02:52:50.005500 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:52:50.005510 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:52:50.005520 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:52:50.005529 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:52:50.005538 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:52:50.005548 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:50.005559 | orchestrator | 2026-04-06 02:52:50.005569 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-06 02:52:50.005580 | orchestrator | 2026-04-06 02:52:50.005590 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-06 02:52:50.005601 | orchestrator | Monday 06 April 2026 02:52:42 +0000 (0:00:01.449) 0:00:02.999 ********** 2026-04-06 02:52:50.005609 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:52:50.005616 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:52:50.005622 | orchestrator | ok: [testbed-manager] 2026-04-06 02:52:50.005629 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:52:50.005635 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:52:50.005641 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:52:50.005647 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:52:50.005654 | orchestrator | 2026-04-06 02:52:50.005660 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-06 02:52:50.005666 | orchestrator | 2026-04-06 02:52:50.005673 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-06 02:52:50.005680 | orchestrator | Monday 06 April 2026 02:52:48 +0000 (0:00:05.745) 0:00:08.744 ********** 2026-04-06 02:52:50.005687 | orchestrator | skipping: [testbed-manager] 2026-04-06 02:52:50.005694 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:52:50.005702 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:52:50.005709 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:52:50.005716 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:52:50.005723 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:52:50.005731 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:52:50.005737 | orchestrator | 2026-04-06 02:52:50.005745 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 02:52:50.005753 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-06 02:52:50.005761 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-06 02:52:50.005769 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-06 02:52:50.005781 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-06 02:52:50.005792 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-06 02:52:50.005885 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-06 02:52:50.005899 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-06 02:52:50.005910 | orchestrator | 2026-04-06 02:52:50.005921 | orchestrator | 2026-04-06 02:52:50.005930 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 02:52:50.005957 | orchestrator | Monday 06 April 2026 02:52:49 +0000 (0:00:00.748) 0:00:09.493 ********** 2026-04-06 02:52:50.005968 | orchestrator | =============================================================================== 2026-04-06 02:52:50.005978 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.75s 2026-04-06 02:52:50.005988 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.45s 2026-04-06 02:52:50.005997 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.25s 2026-04-06 02:52:50.006007 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.75s 2026-04-06 02:52:52.764473 | orchestrator | 2026-04-06 02:52:52 | INFO  | Task f9c27647-b736-418c-815b-45b722b4800f (ceph) was prepared for execution. 2026-04-06 02:52:52.764562 | orchestrator | 2026-04-06 02:52:52 | INFO  | It takes a moment until task f9c27647-b736-418c-815b-45b722b4800f (ceph) has been started and output is visible here. 
2026-04-06 02:53:12.876192 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-06 02:53:12.876424 | orchestrator | 2.16.14 2026-04-06 02:53:12.876454 | orchestrator | 2026-04-06 02:53:12.876472 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-04-06 02:53:12.876492 | orchestrator | 2026-04-06 02:53:12.876510 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-06 02:53:12.876529 | orchestrator | Monday 06 April 2026 02:52:58 +0000 (0:00:00.909) 0:00:00.909 ********** 2026-04-06 02:53:12.876550 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:53:12.876568 | orchestrator | 2026-04-06 02:53:12.876586 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-06 02:53:12.876603 | orchestrator | Monday 06 April 2026 02:52:59 +0000 (0:00:01.323) 0:00:02.233 ********** 2026-04-06 02:53:12.876621 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:53:12.876639 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:53:12.876658 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:53:12.876712 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:53:12.876733 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:53:12.876754 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:53:12.876802 | orchestrator | 2026-04-06 02:53:12.876837 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-06 02:53:12.876859 | orchestrator | Monday 06 April 2026 02:53:01 +0000 (0:00:01.326) 0:00:03.559 ********** 2026-04-06 02:53:12.876881 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:53:12.876902 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:53:12.876924 | orchestrator | ok: [testbed-node-5] 2026-04-06 
02:53:12.876945 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:53:12.876964 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:53:12.876984 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:53:12.877004 | orchestrator | 2026-04-06 02:53:12.877024 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-06 02:53:12.877044 | orchestrator | Monday 06 April 2026 02:53:01 +0000 (0:00:00.839) 0:00:04.399 ********** 2026-04-06 02:53:12.877064 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:53:12.877084 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:53:12.877104 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:53:12.877124 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:53:12.877176 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:53:12.877195 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:53:12.877215 | orchestrator | 2026-04-06 02:53:12.877235 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-06 02:53:12.877255 | orchestrator | Monday 06 April 2026 02:53:03 +0000 (0:00:01.038) 0:00:05.437 ********** 2026-04-06 02:53:12.877275 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:53:12.877294 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:53:12.877314 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:53:12.877449 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:53:12.877469 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:53:12.877489 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:53:12.877509 | orchestrator | 2026-04-06 02:53:12.877528 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-06 02:53:12.877549 | orchestrator | Monday 06 April 2026 02:53:03 +0000 (0:00:00.853) 0:00:06.291 ********** 2026-04-06 02:53:12.877568 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:53:12.877588 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:53:12.877607 | orchestrator | ok: 
[testbed-node-5]
2026-04-06 02:53:12.877627 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:53:12.877647 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:53:12.877667 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:53:12.877687 | orchestrator |
2026-04-06 02:53:12.877707 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-06 02:53:12.877727 | orchestrator | Monday 06 April 2026 02:53:04 +0000 (0:00:00.637) 0:00:06.928 **********
2026-04-06 02:53:12.877747 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:53:12.877764 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:53:12.877782 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:53:12.877800 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:53:12.877818 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:53:12.877835 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:53:12.877852 | orchestrator |
2026-04-06 02:53:12.877870 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-06 02:53:12.877887 | orchestrator | Monday 06 April 2026 02:53:05 +0000 (0:00:00.910) 0:00:07.839 **********
2026-04-06 02:53:12.877905 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:53:12.877923 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:53:12.877941 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:53:12.877958 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:53:12.877977 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:53:12.877995 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:53:12.878012 | orchestrator |
2026-04-06 02:53:12.878113 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-06 02:53:12.878131 | orchestrator | Monday 06 April 2026 02:53:06 +0000 (0:00:00.739) 0:00:08.579 **********
2026-04-06 02:53:12.878150 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:53:12.878214 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:53:12.878233 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:53:12.878254 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:53:12.878294 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:53:12.878314 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:53:12.878356 | orchestrator |
2026-04-06 02:53:12.878374 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-06 02:53:12.878392 | orchestrator | Monday 06 April 2026 02:53:07 +0000 (0:00:00.917) 0:00:09.496 **********
2026-04-06 02:53:12.878409 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-06 02:53:12.878427 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 02:53:12.878445 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 02:53:12.878461 | orchestrator |
2026-04-06 02:53:12.878479 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-06 02:53:12.878497 | orchestrator | Monday 06 April 2026 02:53:07 +0000 (0:00:00.826) 0:00:10.323 **********
2026-04-06 02:53:12.878529 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:53:12.878547 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:53:12.878564 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:53:12.878615 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:53:12.878634 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:53:12.878652 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:53:12.878669 | orchestrator |
2026-04-06 02:53:12.878687 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-06 02:53:12.878705 | orchestrator | Monday 06 April 2026 02:53:08 +0000 (0:00:00.879) 0:00:11.202 **********
2026-04-06 02:53:12.878723 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] =>
(item=testbed-node-0) 2026-04-06 02:53:12.878740 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 02:53:12.878758 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 02:53:12.878776 | orchestrator | 2026-04-06 02:53:12.878794 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-06 02:53:12.878811 | orchestrator | Monday 06 April 2026 02:53:11 +0000 (0:00:02.537) 0:00:13.740 ********** 2026-04-06 02:53:12.878829 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-06 02:53:12.878847 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-06 02:53:12.878865 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-06 02:53:12.878883 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:53:12.878901 | orchestrator | 2026-04-06 02:53:12.878918 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-06 02:53:12.878936 | orchestrator | Monday 06 April 2026 02:53:11 +0000 (0:00:00.469) 0:00:14.209 ********** 2026-04-06 02:53:12.878954 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-06 02:53:12.878973 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-06 02:53:12.878988 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-06 02:53:12.879003 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:53:12.879018 | orchestrator | 2026-04-06 02:53:12.879032 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-06 02:53:12.879047 | orchestrator | Monday 06 April 2026 02:53:12 +0000 (0:00:00.692) 0:00:14.902 ********** 2026-04-06 02:53:12.879067 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:12.879087 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:12.879104 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:12.879134 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:53:12.879151 | orchestrator | 2026-04-06 02:53:12.879176 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2026-04-06 02:53:12.879192 | orchestrator | Monday 06 April 2026 02:53:12 +0000 (0:00:00.178) 0:00:15.080 ********** 2026-04-06 02:53:12.879225 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-06 02:53:09.763932', 'end': '2026-04-06 02:53:09.800996', 'delta': '0:00:00.037064', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-06 02:53:23.682520 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-06 02:53:10.314757', 'end': '2026-04-06 02:53:10.366830', 'delta': '0:00:00.052073', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-06 02:53:23.682626 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-06 02:53:10.882692', 'end': '2026-04-06 02:53:10.932804', 'delta': 
'0:00:00.050112', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-06 02:53:23.682640 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:53:23.682651 | orchestrator |
2026-04-06 02:53:23.682660 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-06 02:53:23.682670 | orchestrator | Monday 06 April 2026 02:53:12 +0000 (0:00:00.187) 0:00:15.268 **********
2026-04-06 02:53:23.682678 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:53:23.682687 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:53:23.682695 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:53:23.682703 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:53:23.682711 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:53:23.682719 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:53:23.682727 | orchestrator |
2026-04-06 02:53:23.682736 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-06 02:53:23.682744 | orchestrator | Monday 06 April 2026 02:53:13 +0000 (0:00:00.907) 0:00:16.176 **********
2026-04-06 02:53:23.682753 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-06 02:53:23.682761 | orchestrator |
2026-04-06 02:53:23.682770 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-06 02:53:23.682778 | orchestrator | Monday 06 April 2026 02:53:14 +0000 (0:00:00.946) 0:00:17.122 **********
2026-04-06 02:53:23.682806 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:53:23.682814 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:53:23.682822 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:53:23.682831 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:53:23.682839 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:53:23.682847 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:53:23.682855 | orchestrator |
2026-04-06 02:53:23.682863 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-06 02:53:23.682872 | orchestrator | Monday 06 April 2026 02:53:15 +0000 (0:00:00.934) 0:00:18.056 **********
2026-04-06 02:53:23.682880 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:53:23.682888 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:53:23.682896 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:53:23.682904 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:53:23.682912 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:53:23.682921 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:53:23.682929 | orchestrator |
2026-04-06 02:53:23.682937 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-06 02:53:23.682945 | orchestrator | Monday 06 April 2026 02:53:16 +0000 (0:00:01.319) 0:00:19.376 **********
2026-04-06 02:53:23.682953 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:53:23.682961 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:53:23.682969 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:53:23.682977 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:53:23.682985 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:53:23.683007 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:53:23.683016 | orchestrator |
2026-04-06 02:53:23.683024 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-06 02:53:23.683033 | orchestrator | Monday 06 April 2026 02:53:17 +0000 (0:00:00.680) 0:00:20.057 **********
2026-04-06 02:53:23.683045 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:53:23.683059 | orchestrator |
2026-04-06 02:53:23.683074 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-06 02:53:23.683085 | orchestrator | Monday 06 April 2026 02:53:17 +0000 (0:00:00.134) 0:00:20.192 **********
2026-04-06 02:53:23.683095 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:53:23.683104 | orchestrator |
2026-04-06 02:53:23.683114 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-06 02:53:23.683123 | orchestrator | Monday 06 April 2026 02:53:18 +0000 (0:00:00.231) 0:00:20.423 **********
2026-04-06 02:53:23.683133 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:53:23.683142 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:53:23.683151 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:53:23.683161 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:53:23.683170 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:53:23.683180 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:53:23.683189 | orchestrator |
2026-04-06 02:53:23.683215 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-06 02:53:23.683225 | orchestrator | Monday 06 April 2026 02:53:18 +0000 (0:00:00.859) 0:00:21.282 **********
2026-04-06 02:53:23.683235 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:53:23.683244 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:53:23.683253 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:53:23.683262 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:53:23.683270 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:53:23.683279 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:53:23.683288 | orchestrator |
2026-04-06 02:53:23.683297 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-06 02:53:23.683306 | orchestrator | Monday 06 April 2026 02:53:19 +0000 (0:00:00.689) 0:00:21.971 **********
2026-04-06 02:53:23.683315 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:53:23.683345 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:53:23.683355 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:53:23.683373 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:53:23.683383 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:53:23.683392 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:53:23.683402 | orchestrator |
2026-04-06 02:53:23.683411 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-06 02:53:23.683420 | orchestrator | Monday 06 April 2026 02:53:20 +0000 (0:00:00.882) 0:00:22.854 **********
2026-04-06 02:53:23.683428 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:53:23.683436 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:53:23.683444 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:53:23.683452 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:53:23.683460 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:53:23.683468 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:53:23.683476 | orchestrator |
2026-04-06 02:53:23.683484 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-06 02:53:23.683492 | orchestrator | Monday 06 April 2026 02:53:21 +0000 (0:00:00.665) 0:00:23.519 **********
2026-04-06 02:53:23.683500 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:53:23.683508 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:53:23.683516 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:53:23.683524 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:53:23.683532 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:53:23.683540 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:53:23.683548 | orchestrator |
2026-04-06 02:53:23.683556 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-06 02:53:23.683564 | orchestrator | Monday 06 April 2026 02:53:21 +0000 (0:00:00.867) 0:00:24.387 **********
2026-04-06 02:53:23.683572 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:53:23.683580 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:53:23.683588 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:53:23.683596 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:53:23.683604 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:53:23.683612 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:53:23.683620 | orchestrator |
2026-04-06 02:53:23.683628 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-06 02:53:23.683637 | orchestrator | Monday 06 April 2026 02:53:22 +0000 (0:00:00.664) 0:00:25.051 **********
2026-04-06 02:53:23.683645 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:53:23.683653 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:53:23.683661 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:53:23.683669 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:53:23.683677 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:53:23.683685 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:53:23.683693 | orchestrator |
2026-04-06 02:53:23.683701 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-06 02:53:23.683709 | orchestrator | Monday 06 April 2026 02:53:23 +0000 (0:00:00.891) 0:00:25.942 **********
2026-04-06 02:53:23.683719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids':
['dm-name-ceph--44d7a625--0d29--5597--9a0c--b91ce06f2e33-osd--block--44d7a625--0d29--5597--9a0c--b91ce06f2e33', 'dm-uuid-LVM-9nFw926dfpKXupvgijedzJHToRNmcQ5JleWHVnoic4cgBgjJKwf9UMEMV2wXFYs3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:23.683733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--33ff4195--b9ae--565c--9501--f62265c8cf2c-osd--block--33ff4195--b9ae--565c--9501--f62265c8cf2c', 'dm-uuid-LVM-bPoYmFvg2GavrOdhBiQRDEx8f4M6ftpRd0WF3SgLoZI9250ovpvj600rDtqy23dS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:23.683754 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:23.794097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:23.794194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:23.794207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:23.794212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:23.794217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:23.794222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:23.794225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:23.794260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part15', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part16', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 02:53:23.794282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--44d7a625--0d29--5597--9a0c--b91ce06f2e33-osd--block--44d7a625--0d29--5597--9a0c--b91ce06f2e33'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KIe40k-k1Qf-BSLn-gKBM-IKSP-hovG-JLrIYd', 'scsi-0QEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527', 'scsi-SQEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 02:53:23.794290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3-osd--block--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3', 'dm-uuid-LVM-UTQM7S53ibMHEifiI2Bv5Thw7s0lsM0j7tdY8LLV0Ub3l0Z8I0Y4chNDJ3j6J7vO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:23.794294 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--33ff4195--b9ae--565c--9501--f62265c8cf2c-osd--block--33ff4195--b9ae--565c--9501--f62265c8cf2c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oc9r6Q-FBfB-APQ9-Ef3d-Gduy-n2RE-MAdmSJ', 'scsi-0QEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c', 'scsi-SQEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 02:53:23.794305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c307d7c--3927--5061--a8a8--155bb148bb1a-osd--block--8c307d7c--3927--5061--a8a8--155bb148bb1a', 'dm-uuid-LVM-5SBcK6LYcqc3U9JW4A7AEqQb9XhQaJZNALmkUrHWUZpUhCY8hyCk4SVv02FoAkUp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:23.794314 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103', 'scsi-SQEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 02:53:24.051640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.051757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-06-01-39-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 02:53:24.051781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-06 02:53:24.051799 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.051817 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.051834 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.051898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.051917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.051934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.051951 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:53:24.052012 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part1', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part14', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part15', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part16', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 02:53:24.052035 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3-osd--block--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9JZghf-Tj4T-hJH3-TdHl-k5PF-Zmcx-ynVATr', 'scsi-0QEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554', 'scsi-SQEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 02:53:24.052074 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8c307d7c--3927--5061--a8a8--155bb148bb1a-osd--block--8c307d7c--3927--5061--a8a8--155bb148bb1a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bmjYoX-DOC2-0AWC-rYYB-WEnJ-01uQ-WQd2JR', 'scsi-0QEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867', 'scsi-SQEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 02:53:24.052101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de', 'scsi-SQEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 02:53:24.203729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-06-01-39-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 02:53:24.203837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fcd584d6--c8ff--5eaf--81cc--26105cfb5447-osd--block--fcd584d6--c8ff--5eaf--81cc--26105cfb5447', 'dm-uuid-LVM-DDg0C3XoaiYrOzMcB0kfPfqzHg8E5JhRWG4AoOycNeM5Q2WICfjMBHF0YX2mqeJt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.203855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d79f264--f564--5244--b3d4--1e30cd615742-osd--block--4d79f264--f564--5244--b3d4--1e30cd615742', 
'dm-uuid-LVM-Z6Gfl68NWHSIaTDLndMKbJ9g2vXxLKS7H7IVDVpTPXM3dDz207hlZrQACS13BMNP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.203870 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.203917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.203946 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.203960 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.203972 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.204011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.204030 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.204043 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:53:24.204056 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.204079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 02:53:24.204104 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--fcd584d6--c8ff--5eaf--81cc--26105cfb5447-osd--block--fcd584d6--c8ff--5eaf--81cc--26105cfb5447'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lROe02-FRbV-W78v-Dfl5-E5Bd-fAVM-rPPzrC', 'scsi-0QEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d', 'scsi-SQEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 02:53:24.204126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--4d79f264--f564--5244--b3d4--1e30cd615742-osd--block--4d79f264--f564--5244--b3d4--1e30cd615742'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5lLdRw-7tLp-t2wE-raTC-2xO3-NEEr-mCIRos', 'scsi-0QEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0', 'scsi-SQEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 02:53:24.438407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c', 'scsi-SQEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 02:53:24.438489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-06-01-39-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 02:53:24.438520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.438531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.438550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.438557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.438563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.438570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.438591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.438598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.438611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part1', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part14', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part15', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part16', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 02:53:24.438625 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-06-01-39-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 02:53:24.438632 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:53:24.438640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.438656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.438667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-04-06 02:53:24.710097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.710201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.710211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.710219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.710238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.710264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 02:53:24.710279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-06-01-39-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 02:53:24.710288 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:53:24.710295 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:53:24.710302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.710309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.710319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.710371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.710379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.710385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.710392 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.710404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 02:53:24.938674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part1', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part14', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part15', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part16', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 02:53:24.938776 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-06-01-39-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 02:53:24.938791 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:53:24.938801 | orchestrator | 2026-04-06 02:53:24.938810 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-06 02:53:24.938819 | orchestrator | Monday 06 April 2026 02:53:24 +0000 (0:00:01.152) 0:00:27.094 ********** 2026-04-06 02:53:24.938830 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--44d7a625--0d29--5597--9a0c--b91ce06f2e33-osd--block--44d7a625--0d29--5597--9a0c--b91ce06f2e33', 'dm-uuid-LVM-9nFw926dfpKXupvgijedzJHToRNmcQ5JleWHVnoic4cgBgjJKwf9UMEMV2wXFYs3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:24.938876 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--33ff4195--b9ae--565c--9501--f62265c8cf2c-osd--block--33ff4195--b9ae--565c--9501--f62265c8cf2c', 'dm-uuid-LVM-bPoYmFvg2GavrOdhBiQRDEx8f4M6ftpRd0WF3SgLoZI9250ovpvj600rDtqy23dS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:24.938886 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:24.938896 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:24.938909 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:24.938918 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:24.938926 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:24.938947 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3-osd--block--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3', 'dm-uuid-LVM-UTQM7S53ibMHEifiI2Bv5Thw7s0lsM0j7tdY8LLV0Ub3l0Z8I0Y4chNDJ3j6J7vO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.006187 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.006269 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c307d7c--3927--5061--a8a8--155bb148bb1a-osd--block--8c307d7c--3927--5061--a8a8--155bb148bb1a', 'dm-uuid-LVM-5SBcK6LYcqc3U9JW4A7AEqQb9XhQaJZNALmkUrHWUZpUhCY8hyCk4SVv02FoAkUp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 
'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.006293 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.006301 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.006307 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.006354 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.006386 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part15', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part16', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-06 02:53:25.006396 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--44d7a625--0d29--5597--9a0c--b91ce06f2e33-osd--block--44d7a625--0d29--5597--9a0c--b91ce06f2e33'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KIe40k-k1Qf-BSLn-gKBM-IKSP-hovG-JLrIYd', 'scsi-0QEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527', 'scsi-SQEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.006404 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.006422 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': 
['ceph--33ff4195--b9ae--565c--9501--f62265c8cf2c-osd--block--33ff4195--b9ae--565c--9501--f62265c8cf2c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oc9r6Q-FBfB-APQ9-Ef3d-Gduy-n2RE-MAdmSJ', 'scsi-0QEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c', 'scsi-SQEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.331106 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.331236 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103', 'scsi-SQEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.331257 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.331270 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-06-01-39-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.331307 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.331319 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.331407 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.331433 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part1', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part14', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part15', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part16', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.331458 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3-osd--block--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9JZghf-Tj4T-hJH3-TdHl-k5PF-Zmcx-ynVATr', 'scsi-0QEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554', 'scsi-SQEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.331480 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8c307d7c--3927--5061--a8a8--155bb148bb1a-osd--block--8c307d7c--3927--5061--a8a8--155bb148bb1a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bmjYoX-DOC2-0AWC-rYYB-WEnJ-01uQ-WQd2JR', 'scsi-0QEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867', 'scsi-SQEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.617996 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de', 'scsi-SQEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.618123 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-06-01-39-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.618152 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:53:25.618161 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fcd584d6--c8ff--5eaf--81cc--26105cfb5447-osd--block--fcd584d6--c8ff--5eaf--81cc--26105cfb5447', 'dm-uuid-LVM-DDg0C3XoaiYrOzMcB0kfPfqzHg8E5JhRWG4AoOycNeM5Q2WICfjMBHF0YX2mqeJt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.618169 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d79f264--f564--5244--b3d4--1e30cd615742-osd--block--4d79f264--f564--5244--b3d4--1e30cd615742', 'dm-uuid-LVM-Z6Gfl68NWHSIaTDLndMKbJ9g2vXxLKS7H7IVDVpTPXM3dDz207hlZrQACS13BMNP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 02:53:25.618197 | orchestrator | skipping: 
[testbed-node-5] => (item=loop0, false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 02:53:25.618219 | orchestrator | skipping: [testbed-node-5] => (item=loop1, false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 02:53:25.618237 | orchestrator | skipping: [testbed-node-5] => (item=loop2, false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 02:53:25.618244 | orchestrator | skipping: [testbed-node-5] => (item=loop3, false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 02:53:25.618256 | orchestrator | skipping: [testbed-node-5] => (item=loop4, false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 02:53:25.618262 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:53:25.618269 | orchestrator | skipping: [testbed-node-5] => (item=loop5, false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 02:53:25.618275 | orchestrator | skipping: [testbed-node-5] => (item=loop6, false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 02:53:25.618282 | orchestrator | skipping: [testbed-node-5] => (item=loop7, false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 02:53:25.618301 | orchestrator | skipping: [testbed-node-5] => (item=sda, false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 02:53:25.737250 | orchestrator | skipping: [testbed-node-5] => (item=sdb, false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 02:53:25.737458 | orchestrator | skipping: [testbed-node-0] => (item=loop0, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:25.737478 | orchestrator | skipping: [testbed-node-0] => (item=loop1, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:25.737492 | orchestrator | skipping: [testbed-node-5] => (item=sdc, false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 02:53:25.737529 | orchestrator | skipping: [testbed-node-0] => (item=loop2, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:25.737562 | orchestrator | skipping: [testbed-node-0] => (item=loop3, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:25.737576 | orchestrator | skipping: [testbed-node-5] => (item=sdd, false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 02:53:25.737678 | orchestrator | skipping: [testbed-node-0] => (item=loop4, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:25.737705 | orchestrator | skipping: [testbed-node-5] => (item=sr0, false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 02:53:25.737717 | orchestrator | skipping: [testbed-node-0] => (item=loop5, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:25.737741 | orchestrator | skipping: [testbed-node-0] => (item=loop6, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:25.737764 | orchestrator | skipping: [testbed-node-0] => (item=loop7, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:25.943664 | orchestrator | skipping: [testbed-node-0] => (item=sda, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:25.943779 | orchestrator | skipping: [testbed-node-0] => (item=sr0, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:25.943819 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:53:25.943834 | orchestrator | skipping: [testbed-node-1] => (item=loop0, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:25.943895 | orchestrator | skipping: [testbed-node-1] => (item=loop1, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:25.943909 | orchestrator | skipping: [testbed-node-1] => (item=loop2, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:25.943920 | orchestrator | skipping: [testbed-node-1] => (item=loop3, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:25.943932 | orchestrator | skipping: [testbed-node-1] => (item=loop4, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:25.943951 | orchestrator | skipping: [testbed-node-1] => (item=loop5, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:25.943970 | orchestrator | skipping: [testbed-node-1] => (item=loop6, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:25.943982 | orchestrator | skipping: [testbed-node-1] => (item=loop7, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:25.944005 | orchestrator | skipping: [testbed-node-1] => (item=sda, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:26.181829 | orchestrator | skipping: [testbed-node-1] => (item=sr0, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:26.181934 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:53:26.181950 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:53:26.181963 | orchestrator | skipping: [testbed-node-2] => (item=loop0, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:26.181977 | orchestrator | skipping: [testbed-node-2] => (item=loop1, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:26.181989 | orchestrator | skipping: [testbed-node-2] => (item=loop2, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:26.182000 | orchestrator | skipping: [testbed-node-2] => (item=loop3, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:26.182072 | orchestrator | skipping: [testbed-node-2] => (item=loop4, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:26.182155 | orchestrator | skipping: [testbed-node-2] => (item=loop5, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:26.182170 | orchestrator | skipping: [testbed-node-2] => (item=loop6, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:26.182181 | orchestrator | skipping: [testbed-node-2] => (item=loop7, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:26.182196 | orchestrator | skipping: [testbed-node-2] => (item=sda, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:26.182232 | orchestrator | skipping: [testbed-node-2] => (item=sr0, false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 02:53:38.812689 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:53:38.812800 | orchestrator |
2026-04-06 02:53:38.812816 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] 
****************************** 2026-04-06 02:53:38.812828 | orchestrator | Monday 06 April 2026 02:53:26 +0000 (0:00:01.477) 0:00:28.572 ********** 2026-04-06 02:53:38.812840 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:53:38.812852 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:53:38.812863 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:53:38.812875 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:53:38.812886 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:53:38.812897 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:53:38.812908 | orchestrator | 2026-04-06 02:53:38.812919 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-06 02:53:38.812930 | orchestrator | Monday 06 April 2026 02:53:27 +0000 (0:00:00.973) 0:00:29.545 ********** 2026-04-06 02:53:38.812941 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:53:38.812952 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:53:38.812963 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:53:38.812974 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:53:38.812985 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:53:38.812996 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:53:38.813007 | orchestrator | 2026-04-06 02:53:38.813018 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 02:53:38.813029 | orchestrator | Monday 06 April 2026 02:53:28 +0000 (0:00:00.897) 0:00:30.443 ********** 2026-04-06 02:53:38.813040 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:53:38.813051 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:53:38.813062 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:53:38.813073 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:53:38.813084 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:53:38.813095 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:53:38.813106 | orchestrator | 2026-04-06 02:53:38.813117 | 
orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 02:53:38.813129 | orchestrator | Monday 06 April 2026 02:53:28 +0000 (0:00:00.637) 0:00:31.081 ********** 2026-04-06 02:53:38.813140 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:53:38.813151 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:53:38.813162 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:53:38.813173 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:53:38.813184 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:53:38.813195 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:53:38.813206 | orchestrator | 2026-04-06 02:53:38.813217 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 02:53:38.813228 | orchestrator | Monday 06 April 2026 02:53:29 +0000 (0:00:00.915) 0:00:31.996 ********** 2026-04-06 02:53:38.813239 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:53:38.813250 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:53:38.813261 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:53:38.813294 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:53:38.813305 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:53:38.813316 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:53:38.813327 | orchestrator | 2026-04-06 02:53:38.813425 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 02:53:38.813447 | orchestrator | Monday 06 April 2026 02:53:30 +0000 (0:00:00.686) 0:00:32.682 ********** 2026-04-06 02:53:38.813465 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:53:38.813484 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:53:38.813495 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:53:38.813506 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:53:38.813516 | orchestrator | skipping: [testbed-node-1] 
2026-04-06 02:53:38.813527 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:53:38.813538 | orchestrator | 2026-04-06 02:53:38.813549 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-06 02:53:38.813560 | orchestrator | Monday 06 April 2026 02:53:31 +0000 (0:00:00.903) 0:00:33.586 ********** 2026-04-06 02:53:38.813571 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-06 02:53:38.813582 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-06 02:53:38.813593 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-06 02:53:38.813604 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-06 02:53:38.813615 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-06 02:53:38.813626 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-06 02:53:38.813636 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-06 02:53:38.813647 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-06 02:53:38.813658 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-06 02:53:38.813669 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-06 02:53:38.813680 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-06 02:53:38.813691 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-06 02:53:38.813701 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-04-06 02:53:38.813712 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-06 02:53:38.813723 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-06 02:53:38.813734 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-04-06 02:53:38.813745 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-06 02:53:38.813777 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-06 
02:53:38.813797 | orchestrator | 2026-04-06 02:53:38.813816 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-06 02:53:38.813836 | orchestrator | Monday 06 April 2026 02:53:32 +0000 (0:00:01.801) 0:00:35.388 ********** 2026-04-06 02:53:38.813848 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-06 02:53:38.813860 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-06 02:53:38.813871 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-06 02:53:38.813882 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:53:38.813893 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-06 02:53:38.813904 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-06 02:53:38.813914 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-06 02:53:38.813945 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:53:38.813957 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-06 02:53:38.813968 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-06 02:53:38.813979 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-06 02:53:38.813989 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:53:38.814000 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-06 02:53:38.814011 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-06 02:53:38.814089 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-06 02:53:38.814101 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:53:38.814112 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-06 02:53:38.814123 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-06 02:53:38.814134 | orchestrator | skipping: [testbed-node-1] => 
(item=testbed-node-2)  2026-04-06 02:53:38.814144 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:53:38.814155 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-06 02:53:38.814166 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-06 02:53:38.814177 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-06 02:53:38.814188 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:53:38.814199 | orchestrator | 2026-04-06 02:53:38.814210 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-06 02:53:38.814221 | orchestrator | Monday 06 April 2026 02:53:34 +0000 (0:00:01.033) 0:00:36.421 ********** 2026-04-06 02:53:38.814232 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:53:38.814242 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:53:38.814253 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:53:38.814265 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 02:53:38.814276 | orchestrator | 2026-04-06 02:53:38.814287 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-06 02:53:38.814300 | orchestrator | Monday 06 April 2026 02:53:35 +0000 (0:00:01.162) 0:00:37.583 ********** 2026-04-06 02:53:38.814311 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:53:38.814322 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:53:38.814351 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:53:38.814363 | orchestrator | 2026-04-06 02:53:38.814374 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-06 02:53:38.814386 | orchestrator | Monday 06 April 2026 02:53:35 +0000 (0:00:00.385) 0:00:37.968 ********** 2026-04-06 02:53:38.814397 | orchestrator 
| skipping: [testbed-node-3] 2026-04-06 02:53:38.814408 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:53:38.814419 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:53:38.814431 | orchestrator | 2026-04-06 02:53:38.814450 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-06 02:53:38.814477 | orchestrator | Monday 06 April 2026 02:53:35 +0000 (0:00:00.382) 0:00:38.351 ********** 2026-04-06 02:53:38.814499 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:53:38.814518 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:53:38.814537 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:53:38.814557 | orchestrator | 2026-04-06 02:53:38.814574 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-06 02:53:38.814591 | orchestrator | Monday 06 April 2026 02:53:36 +0000 (0:00:00.356) 0:00:38.708 ********** 2026-04-06 02:53:38.814611 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:53:38.814631 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:53:38.814650 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:53:38.814669 | orchestrator | 2026-04-06 02:53:38.814688 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-06 02:53:38.814708 | orchestrator | Monday 06 April 2026 02:53:37 +0000 (0:00:00.780) 0:00:39.489 ********** 2026-04-06 02:53:38.814729 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 02:53:38.814749 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 02:53:38.814770 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 02:53:38.814790 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:53:38.814809 | orchestrator | 2026-04-06 02:53:38.814822 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-06 02:53:38.814844 | 
orchestrator | Monday 06 April 2026 02:53:37 +0000 (0:00:00.430) 0:00:39.920 ********** 2026-04-06 02:53:38.814855 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 02:53:38.814866 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 02:53:38.814877 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 02:53:38.814888 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:53:38.814899 | orchestrator | 2026-04-06 02:53:38.814910 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-06 02:53:38.814921 | orchestrator | Monday 06 April 2026 02:53:37 +0000 (0:00:00.435) 0:00:40.355 ********** 2026-04-06 02:53:38.814940 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 02:53:38.814951 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 02:53:38.814962 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 02:53:38.814973 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:53:38.814984 | orchestrator | 2026-04-06 02:53:38.814995 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-06 02:53:38.815008 | orchestrator | Monday 06 April 2026 02:53:38 +0000 (0:00:00.443) 0:00:40.799 ********** 2026-04-06 02:53:38.815026 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:53:38.815047 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:53:38.815074 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:53:38.815092 | orchestrator | 2026-04-06 02:53:38.815109 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-06 02:53:38.815144 | orchestrator | Monday 06 April 2026 02:53:38 +0000 (0:00:00.403) 0:00:41.203 ********** 2026-04-06 02:54:00.015453 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-06 02:54:00.015573 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-04-06 02:54:00.015595 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-06 02:54:00.015614 | orchestrator | 2026-04-06 02:54:00.015633 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-06 02:54:00.015651 | orchestrator | Monday 06 April 2026 02:53:39 +0000 (0:00:01.099) 0:00:42.302 ********** 2026-04-06 02:54:00.015668 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 02:54:00.015686 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 02:54:00.015703 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 02:54:00.015721 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-06 02:54:00.015741 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-06 02:54:00.015760 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-06 02:54:00.015779 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-06 02:54:00.015797 | orchestrator | 2026-04-06 02:54:00.015817 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-06 02:54:00.015837 | orchestrator | Monday 06 April 2026 02:53:40 +0000 (0:00:00.958) 0:00:43.260 ********** 2026-04-06 02:54:00.015856 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 02:54:00.015870 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 02:54:00.015883 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 02:54:00.015897 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-06 02:54:00.015910 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-06 02:54:00.015923 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-06 02:54:00.015937 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-06 02:54:00.015956 | orchestrator | 2026-04-06 02:54:00.016006 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-06 02:54:00.016026 | orchestrator | Monday 06 April 2026 02:53:42 +0000 (0:00:02.134) 0:00:45.395 ********** 2026-04-06 02:54:00.016046 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:54:00.016065 | orchestrator | 2026-04-06 02:54:00.016083 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-06 02:54:00.016102 | orchestrator | Monday 06 April 2026 02:53:44 +0000 (0:00:01.423) 0:00:46.818 ********** 2026-04-06 02:54:00.016121 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:54:00.016141 | orchestrator | 2026-04-06 02:54:00.016160 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-06 02:54:00.016176 | orchestrator | Monday 06 April 2026 02:53:45 +0000 (0:00:01.352) 0:00:48.171 ********** 2026-04-06 02:54:00.016188 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:54:00.016200 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:54:00.016211 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:54:00.016222 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:54:00.016233 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:54:00.016244 | 
orchestrator | ok: [testbed-node-2] 2026-04-06 02:54:00.016256 | orchestrator | 2026-04-06 02:54:00.016267 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-06 02:54:00.016278 | orchestrator | Monday 06 April 2026 02:53:47 +0000 (0:00:01.285) 0:00:49.457 ********** 2026-04-06 02:54:00.016290 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:54:00.016301 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:54:00.016312 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:54:00.016323 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:54:00.016335 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:54:00.016372 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:54:00.016387 | orchestrator | 2026-04-06 02:54:00.016398 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-06 02:54:00.016410 | orchestrator | Monday 06 April 2026 02:53:47 +0000 (0:00:00.727) 0:00:50.185 ********** 2026-04-06 02:54:00.016421 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:54:00.016433 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:54:00.016444 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:54:00.016455 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:54:00.016466 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:54:00.016493 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:54:00.016505 | orchestrator | 2026-04-06 02:54:00.016516 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-06 02:54:00.016528 | orchestrator | Monday 06 April 2026 02:53:48 +0000 (0:00:00.961) 0:00:51.146 ********** 2026-04-06 02:54:00.016539 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:54:00.016550 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:54:00.016561 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:54:00.016572 | orchestrator | ok: [testbed-node-4] 2026-04-06 
02:54:00.016584 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:54:00.016595 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:54:00.016606 | orchestrator | 2026-04-06 02:54:00.016617 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-06 02:54:00.016629 | orchestrator | Monday 06 April 2026 02:53:49 +0000 (0:00:00.781) 0:00:51.928 ********** 2026-04-06 02:54:00.016640 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:54:00.016651 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:54:00.016685 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:54:00.016697 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:54:00.016708 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:54:00.016743 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:54:00.016754 | orchestrator | 2026-04-06 02:54:00.016765 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-06 02:54:00.016785 | orchestrator | Monday 06 April 2026 02:53:50 +0000 (0:00:01.315) 0:00:53.243 ********** 2026-04-06 02:54:00.016797 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:54:00.016808 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:54:00.016819 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:54:00.016830 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:54:00.016841 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:54:00.016852 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:54:00.016863 | orchestrator | 2026-04-06 02:54:00.016874 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-06 02:54:00.016885 | orchestrator | Monday 06 April 2026 02:53:51 +0000 (0:00:00.681) 0:00:53.925 ********** 2026-04-06 02:54:00.016896 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:54:00.016907 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:54:00.016918 | 
orchestrator | skipping: [testbed-node-5] 2026-04-06 02:54:00.016929 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:54:00.016940 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:54:00.016959 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:54:00.016977 | orchestrator | 2026-04-06 02:54:00.016995 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-06 02:54:00.017013 | orchestrator | Monday 06 April 2026 02:53:52 +0000 (0:00:00.947) 0:00:54.872 ********** 2026-04-06 02:54:00.017033 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:54:00.017052 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:54:00.017071 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:54:00.017084 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:54:00.017094 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:54:00.017105 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:54:00.017116 | orchestrator | 2026-04-06 02:54:00.017127 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-06 02:54:00.017138 | orchestrator | Monday 06 April 2026 02:53:53 +0000 (0:00:01.071) 0:00:55.944 ********** 2026-04-06 02:54:00.017149 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:54:00.017160 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:54:00.017171 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:54:00.017182 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:54:00.017192 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:54:00.017203 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:54:00.017214 | orchestrator | 2026-04-06 02:54:00.017225 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-06 02:54:00.017236 | orchestrator | Monday 06 April 2026 02:53:54 +0000 (0:00:01.368) 0:00:57.312 ********** 2026-04-06 02:54:00.017247 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:54:00.017258 | 
orchestrator | skipping: [testbed-node-4] 2026-04-06 02:54:00.017268 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:54:00.017279 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:54:00.017291 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:54:00.017310 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:54:00.017328 | orchestrator | 2026-04-06 02:54:00.017762 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-06 02:54:00.017778 | orchestrator | Monday 06 April 2026 02:53:55 +0000 (0:00:00.665) 0:00:57.978 ********** 2026-04-06 02:54:00.017789 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:54:00.017800 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:54:00.017811 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:54:00.017822 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:54:00.017833 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:54:00.017844 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:54:00.017855 | orchestrator | 2026-04-06 02:54:00.017866 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-06 02:54:00.017877 | orchestrator | Monday 06 April 2026 02:53:56 +0000 (0:00:00.936) 0:00:58.915 ********** 2026-04-06 02:54:00.017888 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:54:00.017917 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:54:00.017933 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:54:00.017949 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:54:00.017964 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:54:00.017991 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:54:00.018012 | orchestrator | 2026-04-06 02:54:00.018126 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-06 02:54:00.018146 | orchestrator | Monday 06 April 2026 02:53:57 +0000 (0:00:00.632) 0:00:59.547 ********** 
2026-04-06 02:54:00.018165 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:54:00.018185 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:54:00.018205 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:54:00.018225 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:54:00.018246 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:54:00.018265 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:54:00.018286 | orchestrator | 2026-04-06 02:54:00.018305 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-06 02:54:00.018318 | orchestrator | Monday 06 April 2026 02:53:58 +0000 (0:00:00.936) 0:01:00.484 ********** 2026-04-06 02:54:00.018329 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:54:00.018340 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:54:00.018414 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:54:00.018427 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:54:00.018439 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:54:00.018462 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:54:00.018473 | orchestrator | 2026-04-06 02:54:00.018485 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-06 02:54:00.018496 | orchestrator | Monday 06 April 2026 02:53:58 +0000 (0:00:00.687) 0:01:01.171 ********** 2026-04-06 02:54:00.018507 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:54:00.018518 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:54:00.018529 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:54:00.018540 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:54:00.018551 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:54:00.018562 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:54:00.018574 | orchestrator | 2026-04-06 02:54:00.018594 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-06 
02:54:00.018613 | orchestrator | Monday 06 April 2026 02:53:59 +0000 (0:00:00.920) 0:01:02.092 ********** 2026-04-06 02:54:00.018632 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:54:00.018669 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:55:21.111243 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:55:21.111369 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:55:21.111385 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:55:21.111420 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:55:21.111431 | orchestrator | 2026-04-06 02:55:21.111442 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-06 02:55:21.111454 | orchestrator | Monday 06 April 2026 02:54:00 +0000 (0:00:00.675) 0:01:02.768 ********** 2026-04-06 02:55:21.111464 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:55:21.111474 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:55:21.111484 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:55:21.111495 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:55:21.111505 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:55:21.111515 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:55:21.111524 | orchestrator | 2026-04-06 02:55:21.111534 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-06 02:55:21.111544 | orchestrator | Monday 06 April 2026 02:54:01 +0000 (0:00:00.911) 0:01:03.679 ********** 2026-04-06 02:55:21.111554 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:55:21.111564 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:55:21.111574 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:55:21.111583 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:55:21.111592 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:55:21.111602 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:55:21.111637 | orchestrator | 2026-04-06 02:55:21.111648 | orchestrator | TASK 
[ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-06 02:55:21.111657 | orchestrator | Monday 06 April 2026 02:54:02 +0000 (0:00:00.738) 0:01:04.418 ********** 2026-04-06 02:55:21.111667 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:55:21.111676 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:55:21.111686 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:55:21.111695 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:55:21.111705 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:55:21.111715 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:55:21.111725 | orchestrator | 2026-04-06 02:55:21.111735 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-06 02:55:21.111747 | orchestrator | Monday 06 April 2026 02:54:03 +0000 (0:00:01.778) 0:01:06.196 ********** 2026-04-06 02:55:21.111759 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:55:21.111770 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:55:21.111781 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:55:21.111792 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:55:21.111802 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:55:21.111813 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:55:21.111824 | orchestrator | 2026-04-06 02:55:21.111835 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-06 02:55:21.111846 | orchestrator | Monday 06 April 2026 02:54:05 +0000 (0:00:01.844) 0:01:08.041 ********** 2026-04-06 02:55:21.111858 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:55:21.111869 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:55:21.111880 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:55:21.111891 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:55:21.111902 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:55:21.111913 | orchestrator | changed: [testbed-node-2] 
2026-04-06 02:55:21.111924 | orchestrator | 2026-04-06 02:55:21.111934 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-06 02:55:21.111946 | orchestrator | Monday 06 April 2026 02:54:08 +0000 (0:00:02.586) 0:01:10.627 ********** 2026-04-06 02:55:21.111958 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:55:21.111971 | orchestrator | 2026-04-06 02:55:21.111983 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-06 02:55:21.111994 | orchestrator | Monday 06 April 2026 02:54:09 +0000 (0:00:01.351) 0:01:11.979 ********** 2026-04-06 02:55:21.112005 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:55:21.112016 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:55:21.112027 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:55:21.112038 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:55:21.112048 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:55:21.112060 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:55:21.112069 | orchestrator | 2026-04-06 02:55:21.112079 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-06 02:55:21.112088 | orchestrator | Monday 06 April 2026 02:54:10 +0000 (0:00:00.776) 0:01:12.756 ********** 2026-04-06 02:55:21.112098 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:55:21.112108 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:55:21.112117 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:55:21.112127 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:55:21.112136 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:55:21.112146 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:55:21.112155 | orchestrator | 2026-04-06 02:55:21.112165 | 
orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-06 02:55:21.112175 | orchestrator | Monday 06 April 2026 02:54:11 +0000 (0:00:00.898) 0:01:13.654 ********** 2026-04-06 02:55:21.112184 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-06 02:55:21.112209 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-06 02:55:21.112230 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-06 02:55:21.112240 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-06 02:55:21.112249 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-06 02:55:21.112259 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-06 02:55:21.112269 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-06 02:55:21.112279 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-06 02:55:21.112289 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-06 02:55:21.112317 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-06 02:55:21.112328 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-06 02:55:21.112338 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-06 02:55:21.112347 | orchestrator | 2026-04-06 02:55:21.112357 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-06 02:55:21.112367 | orchestrator | Monday 06 April 2026 02:54:12 +0000 (0:00:01.355) 0:01:15.010 ********** 2026-04-06 02:55:21.112376 | orchestrator | 
changed: [testbed-node-4] 2026-04-06 02:55:21.112386 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:55:21.112410 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:55:21.112420 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:55:21.112429 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:55:21.112439 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:55:21.112449 | orchestrator | 2026-04-06 02:55:21.112459 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-06 02:55:21.112469 | orchestrator | Monday 06 April 2026 02:54:13 +0000 (0:00:01.233) 0:01:16.243 ********** 2026-04-06 02:55:21.112478 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:55:21.112488 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:55:21.112497 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:55:21.112507 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:55:21.112516 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:55:21.112526 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:55:21.112535 | orchestrator | 2026-04-06 02:55:21.112545 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-06 02:55:21.112554 | orchestrator | Monday 06 April 2026 02:54:14 +0000 (0:00:00.740) 0:01:16.984 ********** 2026-04-06 02:55:21.112564 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:55:21.112573 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:55:21.112583 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:55:21.112592 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:55:21.112602 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:55:21.112611 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:55:21.112621 | orchestrator | 2026-04-06 02:55:21.112631 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-06 02:55:21.112640 | 
orchestrator | Monday 06 April 2026 02:54:15 +0000 (0:00:00.938) 0:01:17.923 ********** 2026-04-06 02:55:21.112650 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:55:21.112660 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:55:21.112669 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:55:21.112679 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:55:21.112688 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:55:21.112698 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:55:21.112707 | orchestrator | 2026-04-06 02:55:21.112717 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-06 02:55:21.112727 | orchestrator | Monday 06 April 2026 02:54:16 +0000 (0:00:00.706) 0:01:18.629 ********** 2026-04-06 02:55:21.112744 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:55:21.112754 | orchestrator | 2026-04-06 02:55:21.112763 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-06 02:55:21.112773 | orchestrator | Monday 06 April 2026 02:54:17 +0000 (0:00:01.383) 0:01:20.013 ********** 2026-04-06 02:55:21.112783 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:55:21.112793 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:55:21.112802 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:55:21.112812 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:55:21.112821 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:55:21.112831 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:55:21.112840 | orchestrator | 2026-04-06 02:55:21.112850 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-06 02:55:21.112860 | orchestrator | Monday 06 April 2026 02:55:20 +0000 (0:01:02.697) 0:02:22.710 ********** 2026-04-06 
02:55:21.112870 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-06 02:55:21.112880 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-06 02:55:21.112889 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-06 02:55:21.112899 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:55:21.112909 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-06 02:55:21.112918 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-06 02:55:21.112928 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-06 02:55:21.112938 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:55:21.112948 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-06 02:55:21.112957 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-06 02:55:21.112973 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-06 02:55:21.112983 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:55:21.112993 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-06 02:55:21.113002 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-06 02:55:21.113012 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-06 02:55:21.113022 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:55:21.113031 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-06 02:55:21.113041 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-06 02:55:21.113051 | orchestrator | skipping: [testbed-node-1] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-04-06 02:55:21.113066 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:55:46.486986 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-06 02:55:46.487129 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-06 02:55:46.487158 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-06 02:55:46.487178 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:55:46.487199 | orchestrator | 2026-04-06 02:55:46.487219 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-06 02:55:46.487237 | orchestrator | Monday 06 April 2026 02:55:21 +0000 (0:00:00.793) 0:02:23.504 ********** 2026-04-06 02:55:46.487255 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:55:46.487272 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:55:46.487289 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:55:46.487306 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:55:46.487324 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:55:46.487374 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:55:46.487394 | orchestrator | 2026-04-06 02:55:46.487463 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-06 02:55:46.487487 | orchestrator | Monday 06 April 2026 02:55:22 +0000 (0:00:00.921) 0:02:24.425 ********** 2026-04-06 02:55:46.487506 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:55:46.487525 | orchestrator | 2026-04-06 02:55:46.487543 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-06 02:55:46.487563 | orchestrator | Monday 06 April 2026 02:55:22 +0000 (0:00:00.153) 0:02:24.578 ********** 2026-04-06 02:55:46.487581 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:55:46.487600 | orchestrator | 
skipping: [testbed-node-4] 2026-04-06 02:55:46.487620 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:55:46.487638 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:55:46.487656 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:55:46.487675 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:55:46.487727 | orchestrator | 2026-04-06 02:55:46.487750 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-06 02:55:46.487769 | orchestrator | Monday 06 April 2026 02:55:22 +0000 (0:00:00.710) 0:02:25.289 ********** 2026-04-06 02:55:46.487787 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:55:46.487806 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:55:46.487825 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:55:46.487844 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:55:46.487863 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:55:46.487881 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:55:46.487892 | orchestrator | 2026-04-06 02:55:46.487904 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-06 02:55:46.487921 | orchestrator | Monday 06 April 2026 02:55:23 +0000 (0:00:00.960) 0:02:26.250 ********** 2026-04-06 02:55:46.487938 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:55:46.487956 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:55:46.487973 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:55:46.487992 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:55:46.488009 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:55:46.488026 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:55:46.488044 | orchestrator | 2026-04-06 02:55:46.488062 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-06 02:55:46.488100 | orchestrator | Monday 06 April 2026 02:55:24 +0000 (0:00:00.691) 
0:02:26.941 ********** 2026-04-06 02:55:46.488118 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:55:46.488143 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:55:46.488154 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:55:46.488164 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:55:46.488176 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:55:46.488195 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:55:46.488215 | orchestrator | 2026-04-06 02:55:46.488237 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-06 02:55:46.488256 | orchestrator | Monday 06 April 2026 02:55:28 +0000 (0:00:03.573) 0:02:30.515 ********** 2026-04-06 02:55:46.488269 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:55:46.488279 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:55:46.488290 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:55:46.488301 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:55:46.488312 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:55:46.488322 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:55:46.488333 | orchestrator | 2026-04-06 02:55:46.488344 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-06 02:55:46.488355 | orchestrator | Monday 06 April 2026 02:55:28 +0000 (0:00:00.662) 0:02:31.177 ********** 2026-04-06 02:55:46.488367 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:55:46.488380 | orchestrator | 2026-04-06 02:55:46.488391 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-06 02:55:46.488505 | orchestrator | Monday 06 April 2026 02:55:30 +0000 (0:00:01.749) 0:02:32.927 ********** 2026-04-06 02:55:46.488526 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:55:46.488557 | orchestrator | skipping: 
[testbed-node-4] 2026-04-06 02:55:46.488577 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:55:46.488614 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:55:46.488631 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:55:46.488647 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:55:46.488662 | orchestrator | 2026-04-06 02:55:46.488678 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-06 02:55:46.488696 | orchestrator | Monday 06 April 2026 02:55:31 +0000 (0:00:00.932) 0:02:33.860 ********** 2026-04-06 02:55:46.488714 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:55:46.488730 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:55:46.488748 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:55:46.488766 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:55:46.488782 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:55:46.488798 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:55:46.488814 | orchestrator | 2026-04-06 02:55:46.488831 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-06 02:55:46.488848 | orchestrator | Monday 06 April 2026 02:55:32 +0000 (0:00:00.650) 0:02:34.510 ********** 2026-04-06 02:55:46.488864 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:55:46.488909 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:55:46.488926 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:55:46.488942 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:55:46.488959 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:55:46.488976 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:55:46.488991 | orchestrator | 2026-04-06 02:55:46.489006 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-06 02:55:46.489022 | orchestrator | Monday 06 April 2026 02:55:33 +0000 (0:00:00.960) 0:02:35.471 
********** 2026-04-06 02:55:46.489038 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:55:46.489054 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:55:46.489071 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:55:46.489087 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:55:46.489103 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:55:46.489118 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:55:46.489129 | orchestrator | 2026-04-06 02:55:46.489138 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-06 02:55:46.489148 | orchestrator | Monday 06 April 2026 02:55:33 +0000 (0:00:00.651) 0:02:36.122 ********** 2026-04-06 02:55:46.489158 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:55:46.489167 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:55:46.489177 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:55:46.489186 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:55:46.489196 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:55:46.489205 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:55:46.489215 | orchestrator | 2026-04-06 02:55:46.489224 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-06 02:55:46.489234 | orchestrator | Monday 06 April 2026 02:55:34 +0000 (0:00:00.941) 0:02:37.063 ********** 2026-04-06 02:55:46.489243 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:55:46.489253 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:55:46.489262 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:55:46.489271 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:55:46.489281 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:55:46.489290 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:55:46.489300 | orchestrator | 2026-04-06 02:55:46.489309 | orchestrator | TASK [ceph-container-common : Set_fact 
ceph_release pacific] ******************* 2026-04-06 02:55:46.489319 | orchestrator | Monday 06 April 2026 02:55:35 +0000 (0:00:00.683) 0:02:37.747 ********** 2026-04-06 02:55:46.489357 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:55:46.489382 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:55:46.489398 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:55:46.489442 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:55:46.489459 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:55:46.489473 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:55:46.489486 | orchestrator | 2026-04-06 02:55:46.489501 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-06 02:55:46.489516 | orchestrator | Monday 06 April 2026 02:55:36 +0000 (0:00:00.941) 0:02:38.688 ********** 2026-04-06 02:55:46.489531 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:55:46.489547 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:55:46.489562 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:55:46.489576 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:55:46.489589 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:55:46.489604 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:55:46.489618 | orchestrator | 2026-04-06 02:55:46.489632 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-06 02:55:46.489647 | orchestrator | Monday 06 April 2026 02:55:36 +0000 (0:00:00.710) 0:02:39.399 ********** 2026-04-06 02:55:46.489662 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:55:46.489677 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:55:46.489693 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:55:46.489708 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:55:46.489723 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:55:46.489739 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:55:46.489753 
| orchestrator | 2026-04-06 02:55:46.489769 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-06 02:55:46.489784 | orchestrator | Monday 06 April 2026 02:55:38 +0000 (0:00:01.443) 0:02:40.842 ********** 2026-04-06 02:55:46.489801 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:55:46.489817 | orchestrator | 2026-04-06 02:55:46.489833 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-06 02:55:46.489850 | orchestrator | Monday 06 April 2026 02:55:39 +0000 (0:00:01.427) 0:02:42.270 ********** 2026-04-06 02:55:46.489867 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-04-06 02:55:46.489884 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-04-06 02:55:46.489900 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-04-06 02:55:46.489914 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-04-06 02:55:46.489930 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-04-06 02:55:46.489946 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-04-06 02:55:46.489962 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-06 02:55:46.489992 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-04-06 02:55:46.490002 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-04-06 02:55:46.490012 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-04-06 02:55:46.490119 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-04-06 02:55:46.490138 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-04-06 02:55:46.490153 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-06 
02:55:46.490169 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-04-06 02:55:46.490186 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-04-06 02:55:46.490240 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-04-06 02:55:46.490252 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-04-06 02:55:46.490278 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-04-06 02:55:52.137563 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-06 02:55:52.137697 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-04-06 02:55:52.137717 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-04-06 02:55:52.137728 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-04-06 02:55:52.137739 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-04-06 02:55:52.137751 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-04-06 02:55:52.137761 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-06 02:55:52.137772 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-04-06 02:55:52.137782 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-04-06 02:55:52.137792 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-04-06 02:55:52.137802 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-04-06 02:55:52.137813 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-04-06 02:55:52.137823 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-06 02:55:52.137834 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-04-06 02:55:52.137844 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-04-06 02:55:52.137855 | orchestrator | changed: 
[testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-04-06 02:55:52.137865 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-04-06 02:55:52.137875 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-04-06 02:55:52.137885 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-06 02:55:52.137896 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-04-06 02:55:52.137917 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-04-06 02:55:52.137927 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-04-06 02:55:52.137933 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-06 02:55:52.137940 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-04-06 02:55:52.137946 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-06 02:55:52.137952 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-04-06 02:55:52.137963 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-04-06 02:55:52.137972 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-04-06 02:55:52.137984 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-06 02:55:52.137993 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-06 02:55:52.138003 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-06 02:55:52.138013 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-04-06 02:55:52.138083 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-04-06 02:55:52.138092 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-04-06 02:55:52.138100 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 
2026-04-06 02:55:52.138107 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-06 02:55:52.138114 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-06 02:55:52.138122 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-06 02:55:52.138130 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-06 02:55:52.138138 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-06 02:55:52.138145 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-06 02:55:52.138152 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-06 02:55:52.138159 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-06 02:55:52.138176 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-06 02:55:52.138184 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-06 02:55:52.138191 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-06 02:55:52.138198 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-06 02:55:52.138206 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-06 02:55:52.138228 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-06 02:55:52.138236 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-06 02:55:52.138243 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-06 02:55:52.138250 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-06 02:55:52.138257 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-06 02:55:52.138264 | orchestrator | changed: 
[testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-06 02:55:52.138271 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-06 02:55:52.138278 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-06 02:55:52.138285 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-06 02:55:52.138292 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-06 02:55:52.138317 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-06 02:55:52.138325 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-06 02:55:52.138333 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-06 02:55:52.138341 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-04-06 02:55:52.138348 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-06 02:55:52.138356 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-06 02:55:52.138363 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-06 02:55:52.138370 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-04-06 02:55:52.138378 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-04-06 02:55:52.138385 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-04-06 02:55:52.138392 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-06 02:55:52.138400 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-06 02:55:52.138407 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-04-06 02:55:52.138488 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-04-06 02:55:52.138497 | orchestrator 
| changed: [testbed-node-5] => (item=/var/log/ceph) 2026-04-06 02:55:52.138505 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-04-06 02:55:52.138512 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-04-06 02:55:52.138520 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-04-06 02:55:52.138526 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-04-06 02:55:52.138532 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-04-06 02:55:52.138539 | orchestrator | 2026-04-06 02:55:52.138546 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-06 02:55:52.138552 | orchestrator | Monday 06 April 2026 02:55:46 +0000 (0:00:06.598) 0:02:48.869 ********** 2026-04-06 02:55:52.138559 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:55:52.138565 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:55:52.138571 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:55:52.138578 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 02:55:52.138595 | orchestrator | 2026-04-06 02:55:52.138601 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-06 02:55:52.138608 | orchestrator | Monday 06 April 2026 02:55:47 +0000 (0:00:01.159) 0:02:50.028 ********** 2026-04-06 02:55:52.138614 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-06 02:55:52.138621 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-06 02:55:52.138628 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 
2026-04-06 02:55:52.138634 | orchestrator | 2026-04-06 02:55:52.138640 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-06 02:55:52.138646 | orchestrator | Monday 06 April 2026 02:55:48 +0000 (0:00:00.729) 0:02:50.758 ********** 2026-04-06 02:55:52.138653 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-06 02:55:52.138659 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-06 02:55:52.138665 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-06 02:55:52.138671 | orchestrator | 2026-04-06 02:55:52.138678 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-06 02:55:52.138684 | orchestrator | Monday 06 April 2026 02:55:49 +0000 (0:00:01.204) 0:02:51.963 ********** 2026-04-06 02:55:52.138690 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:55:52.138697 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:55:52.138706 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:55:52.138717 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:55:52.138726 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:55:52.138736 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:55:52.138747 | orchestrator | 2026-04-06 02:55:52.138758 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-06 02:55:52.138776 | orchestrator | Monday 06 April 2026 02:55:50 +0000 (0:00:00.939) 0:02:52.902 ********** 2026-04-06 02:55:52.138784 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:55:52.138790 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:55:52.138796 | orchestrator | ok: [testbed-node-5] 2026-04-06 
02:55:52.138802 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:55:52.138808 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:55:52.138814 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:55:52.138821 | orchestrator | 2026-04-06 02:55:52.138827 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-06 02:55:52.138833 | orchestrator | Monday 06 April 2026 02:55:51 +0000 (0:00:00.699) 0:02:53.602 ********** 2026-04-06 02:55:52.138839 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:55:52.138845 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:55:52.138854 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:55:52.138864 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:55:52.138874 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:55:52.138884 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:55:52.138894 | orchestrator | 2026-04-06 02:55:52.138913 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-06 02:56:06.392278 | orchestrator | Monday 06 April 2026 02:55:52 +0000 (0:00:00.928) 0:02:54.530 ********** 2026-04-06 02:56:06.392357 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:06.392366 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:56:06.392371 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:56:06.392376 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:06.392381 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:06.392399 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:06.392405 | orchestrator | 2026-04-06 02:56:06.392410 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-06 02:56:06.392416 | orchestrator | Monday 06 April 2026 02:55:52 +0000 (0:00:00.651) 0:02:55.182 ********** 2026-04-06 02:56:06.392420 | orchestrator | skipping: [testbed-node-3] 2026-04-06 
02:56:06.392459 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:56:06.392464 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:56:06.392469 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:06.392474 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:06.392478 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:06.392483 | orchestrator | 2026-04-06 02:56:06.392488 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-06 02:56:06.392494 | orchestrator | Monday 06 April 2026 02:55:53 +0000 (0:00:00.909) 0:02:56.092 ********** 2026-04-06 02:56:06.392498 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:06.392503 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:56:06.392508 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:56:06.392512 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:06.392517 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:06.392522 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:06.392526 | orchestrator | 2026-04-06 02:56:06.392531 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-06 02:56:06.392536 | orchestrator | Monday 06 April 2026 02:55:54 +0000 (0:00:00.663) 0:02:56.755 ********** 2026-04-06 02:56:06.392541 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:06.392545 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:56:06.392550 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:56:06.392554 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:06.392559 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:06.392563 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:06.392568 | orchestrator | 2026-04-06 02:56:06.392573 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' 
(new report)] *** 2026-04-06 02:56:06.392578 | orchestrator | Monday 06 April 2026 02:55:55 +0000 (0:00:00.924) 0:02:57.680 ********** 2026-04-06 02:56:06.392582 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:06.392587 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:56:06.392591 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:56:06.392596 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:06.392600 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:06.392605 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:06.392610 | orchestrator | 2026-04-06 02:56:06.392615 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-06 02:56:06.392619 | orchestrator | Monday 06 April 2026 02:55:55 +0000 (0:00:00.647) 0:02:58.327 ********** 2026-04-06 02:56:06.392624 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:06.392629 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:06.392634 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:06.392638 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:56:06.392644 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:56:06.392648 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:56:06.392653 | orchestrator | 2026-04-06 02:56:06.392658 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-06 02:56:06.392662 | orchestrator | Monday 06 April 2026 02:55:58 +0000 (0:00:02.939) 0:03:01.266 ********** 2026-04-06 02:56:06.392667 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:56:06.392671 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:56:06.392676 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:56:06.392681 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:06.392685 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:06.392690 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:06.392694 | 
orchestrator | 2026-04-06 02:56:06.392699 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-06 02:56:06.392709 | orchestrator | Monday 06 April 2026 02:55:59 +0000 (0:00:00.672) 0:03:01.938 ********** 2026-04-06 02:56:06.392713 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:56:06.392718 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:56:06.392722 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:56:06.392727 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:06.392732 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:06.392736 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:06.392741 | orchestrator | 2026-04-06 02:56:06.392745 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-06 02:56:06.392750 | orchestrator | Monday 06 April 2026 02:56:00 +0000 (0:00:00.980) 0:03:02.919 ********** 2026-04-06 02:56:06.392755 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:06.392759 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:56:06.392769 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:56:06.392774 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:06.392779 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:06.392783 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:06.392788 | orchestrator | 2026-04-06 02:56:06.392793 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-06 02:56:06.392797 | orchestrator | Monday 06 April 2026 02:56:01 +0000 (0:00:00.668) 0:03:03.588 ********** 2026-04-06 02:56:06.392802 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-06 02:56:06.392810 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 
2026-04-06 02:56:06.392814 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-06 02:56:06.392819 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:06.392834 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:06.392840 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:06.392845 | orchestrator | 2026-04-06 02:56:06.392851 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-06 02:56:06.392856 | orchestrator | Monday 06 April 2026 02:56:02 +0000 (0:00:00.972) 0:03:04.561 ********** 2026-04-06 02:56:06.392863 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-04-06 02:56:06.392871 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-04-06 02:56:06.392877 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:06.392883 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-04-06 02:56:06.392888 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-04-06 02:56:06.392894 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-04-06 02:56:06.392904 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-04-06 02:56:06.392909 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:56:06.392915 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:56:06.392920 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:06.392926 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:06.392931 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:06.392936 | orchestrator | 2026-04-06 02:56:06.392942 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-06 02:56:06.392947 | orchestrator | Monday 06 April 2026 02:56:02 +0000 (0:00:00.733) 0:03:05.294 ********** 2026-04-06 02:56:06.392953 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:06.392958 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:56:06.392964 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:56:06.392969 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:06.392975 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:06.392980 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:06.392985 | orchestrator | 
2026-04-06 02:56:06.392991 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-06 02:56:06.392996 | orchestrator | Monday 06 April 2026 02:56:03 +0000 (0:00:00.920) 0:03:06.215 ********** 2026-04-06 02:56:06.393002 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:06.393007 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:56:06.393013 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:56:06.393018 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:06.393023 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:06.393029 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:06.393034 | orchestrator | 2026-04-06 02:56:06.393040 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-06 02:56:06.393048 | orchestrator | Monday 06 April 2026 02:56:04 +0000 (0:00:00.711) 0:03:06.927 ********** 2026-04-06 02:56:06.393054 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:06.393059 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:56:06.393065 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:56:06.393070 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:06.393075 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:06.393080 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:06.393085 | orchestrator | 2026-04-06 02:56:06.393091 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-06 02:56:06.393097 | orchestrator | Monday 06 April 2026 02:56:05 +0000 (0:00:00.963) 0:03:07.890 ********** 2026-04-06 02:56:06.393104 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:06.393111 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:56:06.393118 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:56:06.393130 | orchestrator | skipping: 
[testbed-node-0] 2026-04-06 02:56:06.393140 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:06.393147 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:06.393154 | orchestrator | 2026-04-06 02:56:06.393161 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-06 02:56:06.393174 | orchestrator | Monday 06 April 2026 02:56:06 +0000 (0:00:00.890) 0:03:08.780 ********** 2026-04-06 02:56:25.414892 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:25.415880 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:56:25.415926 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:56:25.415933 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:25.415938 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:25.415943 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:25.415964 | orchestrator | 2026-04-06 02:56:25.415971 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-06 02:56:25.415977 | orchestrator | Monday 06 April 2026 02:56:07 +0000 (0:00:00.722) 0:03:09.503 ********** 2026-04-06 02:56:25.415982 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:56:25.415987 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:56:25.415992 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:25.415997 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:56:25.416001 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:25.416006 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:25.416010 | orchestrator | 2026-04-06 02:56:25.416016 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-06 02:56:25.416021 | orchestrator | Monday 06 April 2026 02:56:08 +0000 (0:00:00.924) 0:03:10.427 ********** 2026-04-06 02:56:25.416025 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 02:56:25.416030 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 02:56:25.416035 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 02:56:25.416040 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:25.416045 | orchestrator | 2026-04-06 02:56:25.416049 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-06 02:56:25.416054 | orchestrator | Monday 06 April 2026 02:56:08 +0000 (0:00:00.471) 0:03:10.898 ********** 2026-04-06 02:56:25.416058 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 02:56:25.416063 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 02:56:25.416068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 02:56:25.416072 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:25.416077 | orchestrator | 2026-04-06 02:56:25.416081 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-06 02:56:25.416086 | orchestrator | Monday 06 April 2026 02:56:08 +0000 (0:00:00.484) 0:03:11.383 ********** 2026-04-06 02:56:25.416090 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 02:56:25.416095 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 02:56:25.416099 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 02:56:25.416104 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:25.416108 | orchestrator | 2026-04-06 02:56:25.416113 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-06 02:56:25.416117 | orchestrator | Monday 06 April 2026 02:56:09 +0000 (0:00:00.533) 0:03:11.916 ********** 2026-04-06 02:56:25.416122 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:56:25.416126 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:56:25.416131 | orchestrator | ok: [testbed-node-5] 
2026-04-06 02:56:25.416135 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:25.416140 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:25.416144 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:25.416149 | orchestrator | 2026-04-06 02:56:25.416153 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-06 02:56:25.416158 | orchestrator | Monday 06 April 2026 02:56:10 +0000 (0:00:00.680) 0:03:12.597 ********** 2026-04-06 02:56:25.416163 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-06 02:56:25.416167 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-06 02:56:25.416172 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-06 02:56:25.416177 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-04-06 02:56:25.416181 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:25.416186 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-04-06 02:56:25.416190 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:25.416195 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-04-06 02:56:25.416199 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:25.416204 | orchestrator | 2026-04-06 02:56:25.416209 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-06 02:56:25.416217 | orchestrator | Monday 06 April 2026 02:56:12 +0000 (0:00:01.929) 0:03:14.526 ********** 2026-04-06 02:56:25.416222 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:56:25.416227 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:56:25.416231 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:56:25.416236 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:56:25.416240 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:56:25.416245 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:56:25.416249 | orchestrator | 2026-04-06 02:56:25.416254 | orchestrator | RUNNING 
HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-06 02:56:25.416258 | orchestrator | Monday 06 April 2026 02:56:15 +0000 (0:00:02.897) 0:03:17.424 ********** 2026-04-06 02:56:25.416263 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:56:25.416279 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:56:25.416284 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:56:25.416288 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:56:25.416293 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:56:25.416298 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:56:25.416302 | orchestrator | 2026-04-06 02:56:25.416307 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-06 02:56:25.416311 | orchestrator | Monday 06 April 2026 02:56:16 +0000 (0:00:01.105) 0:03:18.530 ********** 2026-04-06 02:56:25.416316 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:25.416320 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:56:25.416325 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:56:25.416330 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:56:25.416335 | orchestrator | 2026-04-06 02:56:25.416339 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-06 02:56:25.416344 | orchestrator | Monday 06 April 2026 02:56:17 +0000 (0:00:01.211) 0:03:19.742 ********** 2026-04-06 02:56:25.416348 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:56:25.416370 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:56:25.416375 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:56:25.416380 | orchestrator | 2026-04-06 02:56:25.416384 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-06 02:56:25.416389 | orchestrator | Monday 06 April 2026 02:56:17 +0000 
(0:00:00.360) 0:03:20.102 ********** 2026-04-06 02:56:25.416393 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:56:25.416398 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:56:25.416402 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:56:25.416407 | orchestrator | 2026-04-06 02:56:25.416412 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-06 02:56:25.416416 | orchestrator | Monday 06 April 2026 02:56:19 +0000 (0:00:01.534) 0:03:21.637 ********** 2026-04-06 02:56:25.416421 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-06 02:56:25.416425 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-06 02:56:25.416430 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-06 02:56:25.416450 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:25.416458 | orchestrator | 2026-04-06 02:56:25.416465 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-06 02:56:25.416472 | orchestrator | Monday 06 April 2026 02:56:19 +0000 (0:00:00.755) 0:03:22.392 ********** 2026-04-06 02:56:25.416479 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:56:25.416486 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:56:25.416494 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:56:25.416502 | orchestrator | 2026-04-06 02:56:25.416509 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-06 02:56:25.416517 | orchestrator | Monday 06 April 2026 02:56:20 +0000 (0:00:00.361) 0:03:22.754 ********** 2026-04-06 02:56:25.416522 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:25.416527 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:25.416531 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:25.416540 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, 
testbed-node-4, testbed-node-5 2026-04-06 02:56:25.416545 | orchestrator | 2026-04-06 02:56:25.416550 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-06 02:56:25.416554 | orchestrator | Monday 06 April 2026 02:56:21 +0000 (0:00:01.159) 0:03:23.913 ********** 2026-04-06 02:56:25.416559 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 02:56:25.416563 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 02:56:25.416568 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 02:56:25.416572 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:25.416577 | orchestrator | 2026-04-06 02:56:25.416582 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-06 02:56:25.416586 | orchestrator | Monday 06 April 2026 02:56:21 +0000 (0:00:00.450) 0:03:24.364 ********** 2026-04-06 02:56:25.416591 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:25.416595 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:56:25.416600 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:56:25.416605 | orchestrator | 2026-04-06 02:56:25.416609 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-06 02:56:25.416614 | orchestrator | Monday 06 April 2026 02:56:22 +0000 (0:00:00.424) 0:03:24.788 ********** 2026-04-06 02:56:25.416619 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:25.416627 | orchestrator | 2026-04-06 02:56:25.416636 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-06 02:56:25.416646 | orchestrator | Monday 06 April 2026 02:56:22 +0000 (0:00:00.256) 0:03:25.044 ********** 2026-04-06 02:56:25.416654 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:25.416661 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:56:25.416668 | 
orchestrator | skipping: [testbed-node-5] 2026-04-06 02:56:25.416675 | orchestrator | 2026-04-06 02:56:25.416682 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-06 02:56:25.416689 | orchestrator | Monday 06 April 2026 02:56:22 +0000 (0:00:00.349) 0:03:25.394 ********** 2026-04-06 02:56:25.416695 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:25.416702 | orchestrator | 2026-04-06 02:56:25.416709 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-06 02:56:25.416716 | orchestrator | Monday 06 April 2026 02:56:23 +0000 (0:00:00.777) 0:03:26.171 ********** 2026-04-06 02:56:25.416723 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:25.416731 | orchestrator | 2026-04-06 02:56:25.416737 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-06 02:56:25.416744 | orchestrator | Monday 06 April 2026 02:56:24 +0000 (0:00:00.266) 0:03:26.438 ********** 2026-04-06 02:56:25.416751 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:25.416757 | orchestrator | 2026-04-06 02:56:25.416765 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-06 02:56:25.416772 | orchestrator | Monday 06 April 2026 02:56:24 +0000 (0:00:00.162) 0:03:26.600 ********** 2026-04-06 02:56:25.416784 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:25.416792 | orchestrator | 2026-04-06 02:56:25.416799 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-06 02:56:25.416805 | orchestrator | Monday 06 April 2026 02:56:24 +0000 (0:00:00.273) 0:03:26.873 ********** 2026-04-06 02:56:25.416812 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:25.416819 | orchestrator | 2026-04-06 02:56:25.416827 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 
2026-04-06 02:56:25.416835 | orchestrator | Monday 06 April 2026 02:56:24 +0000 (0:00:00.269) 0:03:27.142 ********** 2026-04-06 02:56:25.416843 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 02:56:25.416850 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 02:56:25.416857 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 02:56:25.416871 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:25.416876 | orchestrator | 2026-04-06 02:56:25.416881 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-06 02:56:25.416886 | orchestrator | Monday 06 April 2026 02:56:25 +0000 (0:00:00.461) 0:03:27.604 ********** 2026-04-06 02:56:25.416896 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:45.230229 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:56:45.230341 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:56:45.230356 | orchestrator | 2026-04-06 02:56:45.230367 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-06 02:56:45.230379 | orchestrator | Monday 06 April 2026 02:56:25 +0000 (0:00:00.342) 0:03:27.946 ********** 2026-04-06 02:56:45.230389 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:45.230399 | orchestrator | 2026-04-06 02:56:45.230409 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-06 02:56:45.230419 | orchestrator | Monday 06 April 2026 02:56:25 +0000 (0:00:00.240) 0:03:28.187 ********** 2026-04-06 02:56:45.230428 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:45.230438 | orchestrator | 2026-04-06 02:56:45.230468 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-06 02:56:45.230479 | orchestrator | Monday 06 April 2026 02:56:26 +0000 (0:00:00.243) 0:03:28.431 ********** 2026-04-06 
02:56:45.230488 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:45.230498 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:45.230508 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:45.230519 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 02:56:45.230529 | orchestrator | 2026-04-06 02:56:45.230539 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-04-06 02:56:45.230548 | orchestrator | Monday 06 April 2026 02:56:27 +0000 (0:00:01.176) 0:03:29.607 ********** 2026-04-06 02:56:45.230558 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:56:45.230570 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:56:45.230580 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:56:45.230590 | orchestrator | 2026-04-06 02:56:45.230599 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-06 02:56:45.230609 | orchestrator | Monday 06 April 2026 02:56:27 +0000 (0:00:00.392) 0:03:29.999 ********** 2026-04-06 02:56:45.230619 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:56:45.230629 | orchestrator | changed: [testbed-node-4] 2026-04-06 02:56:45.230639 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:56:45.230649 | orchestrator | 2026-04-06 02:56:45.230658 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-06 02:56:45.230668 | orchestrator | Monday 06 April 2026 02:56:29 +0000 (0:00:01.428) 0:03:31.428 ********** 2026-04-06 02:56:45.230678 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 02:56:45.230688 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 02:56:45.230698 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 02:56:45.230708 | orchestrator | skipping: [testbed-node-3] 2026-04-06 
02:56:45.230717 | orchestrator | 2026-04-06 02:56:45.230727 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-06 02:56:45.230737 | orchestrator | Monday 06 April 2026 02:56:29 +0000 (0:00:00.710) 0:03:32.138 ********** 2026-04-06 02:56:45.230746 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:56:45.230756 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:56:45.230766 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:56:45.230776 | orchestrator | 2026-04-06 02:56:45.230786 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-06 02:56:45.230795 | orchestrator | Monday 06 April 2026 02:56:30 +0000 (0:00:00.351) 0:03:32.489 ********** 2026-04-06 02:56:45.230805 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:45.230815 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:45.230825 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:45.230855 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 02:56:45.230866 | orchestrator | 2026-04-06 02:56:45.230876 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-06 02:56:45.230886 | orchestrator | Monday 06 April 2026 02:56:31 +0000 (0:00:01.162) 0:03:33.652 ********** 2026-04-06 02:56:45.230895 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:56:45.230905 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:56:45.230914 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:56:45.230924 | orchestrator | 2026-04-06 02:56:45.230934 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-06 02:56:45.230944 | orchestrator | Monday 06 April 2026 02:56:31 +0000 (0:00:00.384) 0:03:34.036 ********** 2026-04-06 02:56:45.230953 | orchestrator | changed: [testbed-node-3] 2026-04-06 02:56:45.230963 | 
orchestrator | changed: [testbed-node-4] 2026-04-06 02:56:45.230973 | orchestrator | changed: [testbed-node-5] 2026-04-06 02:56:45.230982 | orchestrator | 2026-04-06 02:56:45.230992 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-06 02:56:45.231002 | orchestrator | Monday 06 April 2026 02:56:32 +0000 (0:00:01.160) 0:03:35.197 ********** 2026-04-06 02:56:45.231011 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 02:56:45.231033 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 02:56:45.231043 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 02:56:45.231053 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:45.231063 | orchestrator | 2026-04-06 02:56:45.231072 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-06 02:56:45.231082 | orchestrator | Monday 06 April 2026 02:56:33 +0000 (0:00:00.933) 0:03:36.130 ********** 2026-04-06 02:56:45.231092 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:56:45.231101 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:56:45.231111 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:56:45.231121 | orchestrator | 2026-04-06 02:56:45.231130 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-04-06 02:56:45.231140 | orchestrator | Monday 06 April 2026 02:56:34 +0000 (0:00:00.617) 0:03:36.748 ********** 2026-04-06 02:56:45.231150 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:45.231159 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:56:45.231173 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:56:45.231189 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:45.231206 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:45.231216 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:45.231225 | orchestrator | 
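The mds and rgw handler blocks above follow ceph-ansible's restart pattern: record a fact before the restart, template a restart script onto the daemon hosts, run it at most once per host, then record the fact again afterwards so the handler cannot fire twice in one play. A minimal sketch of that pattern, with task names mirroring the log (the script template name and group variable are assumptions, not the actual role source):

```yaml
# Hedged sketch of the ceph-handler restart pattern seen above.
- name: Set _mds_handler_called before restart
  ansible.builtin.set_fact:
    _mds_handler_called: true

- name: Copy mds restart script
  ansible.builtin.template:
    src: restart_mds_daemon.sh.j2   # template name assumed
    dest: /tmp/restart_mds_daemon.sh
    mode: "0750"

- name: Restart ceph mds daemon(s)
  ansible.builtin.command: /tmp/restart_mds_daemon.sh
  # Only restart daemons that were already running before the play
  # changed anything; otherwise the item is skipped, as in the log.
  when: hostvars[item]['handler_mds_status'] | default(false)
  delegate_to: "{{ item }}"
  run_once: true
  with_items: "{{ groups['mdss'] }}"   # group name assumed

- name: Set _mds_handler_called after restart
  ansible.builtin.set_fact:
    _mds_handler_called: false
```

In the run above every restart item is skipped because the daemons were not yet running, which is why only the "Copy … restart script" tasks report `changed`.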
2026-04-06 02:56:45.231252 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-06 02:56:45.231263 | orchestrator | Monday 06 April 2026 02:56:35 +0000 (0:00:00.787) 0:03:37.536 ********** 2026-04-06 02:56:45.231272 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:56:45.231282 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:56:45.231292 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:56:45.231301 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:56:45.231311 | orchestrator | 2026-04-06 02:56:45.231321 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-04-06 02:56:45.231331 | orchestrator | Monday 06 April 2026 02:56:36 +0000 (0:00:01.161) 0:03:38.698 ********** 2026-04-06 02:56:45.231340 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:56:45.231350 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:56:45.231359 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:56:45.231369 | orchestrator | 2026-04-06 02:56:45.231378 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-04-06 02:56:45.231388 | orchestrator | Monday 06 April 2026 02:56:36 +0000 (0:00:00.365) 0:03:39.064 ********** 2026-04-06 02:56:45.231398 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:56:45.231415 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:56:45.231425 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:56:45.231435 | orchestrator | 2026-04-06 02:56:45.231444 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-04-06 02:56:45.231512 | orchestrator | Monday 06 April 2026 02:56:37 +0000 (0:00:01.196) 0:03:40.261 ********** 2026-04-06 02:56:45.231523 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-06 02:56:45.231533 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-06 02:56:45.231543 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-06 02:56:45.231553 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:45.231563 | orchestrator | 2026-04-06 02:56:45.231573 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-04-06 02:56:45.231583 | orchestrator | Monday 06 April 2026 02:56:39 +0000 (0:00:01.427) 0:03:41.688 ********** 2026-04-06 02:56:45.231593 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:56:45.231603 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:56:45.231613 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:56:45.231623 | orchestrator | 2026-04-06 02:56:45.231633 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-04-06 02:56:45.231643 | orchestrator | 2026-04-06 02:56:45.231653 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-06 02:56:45.231664 | orchestrator | Monday 06 April 2026 02:56:39 +0000 (0:00:00.667) 0:03:42.356 ********** 2026-04-06 02:56:45.231674 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:56:45.231686 | orchestrator | 2026-04-06 02:56:45.231696 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-06 02:56:45.231706 | orchestrator | Monday 06 April 2026 02:56:40 +0000 (0:00:00.817) 0:03:43.173 ********** 2026-04-06 02:56:45.231716 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:56:45.231726 | orchestrator | 2026-04-06 02:56:45.231737 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-06 02:56:45.231747 | 
orchestrator | Monday 06 April 2026 02:56:41 +0000 (0:00:00.603) 0:03:43.777 ********** 2026-04-06 02:56:45.231757 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:56:45.231767 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:56:45.231777 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:56:45.231787 | orchestrator | 2026-04-06 02:56:45.231797 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-06 02:56:45.231807 | orchestrator | Monday 06 April 2026 02:56:42 +0000 (0:00:00.717) 0:03:44.495 ********** 2026-04-06 02:56:45.231821 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:45.231837 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:45.231853 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:45.231871 | orchestrator | 2026-04-06 02:56:45.231896 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-06 02:56:45.231912 | orchestrator | Monday 06 April 2026 02:56:42 +0000 (0:00:00.623) 0:03:45.119 ********** 2026-04-06 02:56:45.231928 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:45.231944 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:45.231959 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:45.231974 | orchestrator | 2026-04-06 02:56:45.231991 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-06 02:56:45.232007 | orchestrator | Monday 06 April 2026 02:56:43 +0000 (0:00:00.381) 0:03:45.500 ********** 2026-04-06 02:56:45.232022 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:45.232036 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:45.232063 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:45.232079 | orchestrator | 2026-04-06 02:56:45.232095 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-06 02:56:45.232107 | orchestrator | Monday 
06 April 2026 02:56:43 +0000 (0:00:00.342) 0:03:45.843 ********** 2026-04-06 02:56:45.232127 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:56:45.232139 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:56:45.232155 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:56:45.232167 | orchestrator | 2026-04-06 02:56:45.232177 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-06 02:56:45.232187 | orchestrator | Monday 06 April 2026 02:56:44 +0000 (0:00:00.746) 0:03:46.589 ********** 2026-04-06 02:56:45.232196 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:45.232206 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:45.232219 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:56:45.232240 | orchestrator | 2026-04-06 02:56:45.232263 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-06 02:56:45.232277 | orchestrator | Monday 06 April 2026 02:56:44 +0000 (0:00:00.660) 0:03:47.250 ********** 2026-04-06 02:56:45.232294 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:56:45.232311 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:56:45.232339 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:57:07.800263 | orchestrator | 2026-04-06 02:57:07.800358 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-06 02:57:07.800370 | orchestrator | Monday 06 April 2026 02:56:45 +0000 (0:00:00.373) 0:03:47.623 ********** 2026-04-06 02:57:07.800377 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:57:07.800386 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:57:07.800393 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:57:07.800401 | orchestrator | 2026-04-06 02:57:07.800408 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-06 02:57:07.800415 | orchestrator | Monday 06 April 2026 02:56:45 +0000 
(0:00:00.765) 0:03:48.388 ********** 2026-04-06 02:57:07.800423 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:57:07.800430 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:57:07.800437 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:57:07.800444 | orchestrator | 2026-04-06 02:57:07.800451 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-06 02:57:07.800458 | orchestrator | Monday 06 April 2026 02:56:46 +0000 (0:00:00.778) 0:03:49.167 ********** 2026-04-06 02:57:07.800525 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:57:07.800534 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:57:07.800542 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:57:07.800550 | orchestrator | 2026-04-06 02:57:07.800558 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-06 02:57:07.800566 | orchestrator | Monday 06 April 2026 02:56:47 +0000 (0:00:00.639) 0:03:49.806 ********** 2026-04-06 02:57:07.800574 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:57:07.800582 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:57:07.800590 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:57:07.800598 | orchestrator | 2026-04-06 02:57:07.800606 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-06 02:57:07.800614 | orchestrator | Monday 06 April 2026 02:56:47 +0000 (0:00:00.403) 0:03:50.210 ********** 2026-04-06 02:57:07.800621 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:57:07.800629 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:57:07.800636 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:57:07.800643 | orchestrator | 2026-04-06 02:57:07.800651 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-06 02:57:07.800659 | orchestrator | Monday 06 April 2026 02:56:48 +0000 (0:00:00.354) 0:03:50.565 ********** 
2026-04-06 02:57:07.800667 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:57:07.800675 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:57:07.800682 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:57:07.800690 | orchestrator | 2026-04-06 02:57:07.800698 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-06 02:57:07.800706 | orchestrator | Monday 06 April 2026 02:56:48 +0000 (0:00:00.360) 0:03:50.925 ********** 2026-04-06 02:57:07.800713 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:57:07.800744 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:57:07.800752 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:57:07.800758 | orchestrator | 2026-04-06 02:57:07.800766 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-06 02:57:07.800774 | orchestrator | Monday 06 April 2026 02:56:49 +0000 (0:00:00.666) 0:03:51.592 ********** 2026-04-06 02:57:07.800781 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:57:07.800789 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:57:07.800797 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:57:07.800804 | orchestrator | 2026-04-06 02:57:07.800812 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-06 02:57:07.800821 | orchestrator | Monday 06 April 2026 02:56:49 +0000 (0:00:00.382) 0:03:51.974 ********** 2026-04-06 02:57:07.800829 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:57:07.800837 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:57:07.800846 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:57:07.800854 | orchestrator | 2026-04-06 02:57:07.800862 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-06 02:57:07.800871 | orchestrator | Monday 06 April 2026 02:56:49 +0000 (0:00:00.350) 0:03:52.325 ********** 
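The "Check for a … container" tasks feed the `handler_*_status` facts used by the restart handlers: a daemon's handler only fires if its container was already running before the play. A sketch of that check, assuming the podman naming convention used for containerized Ceph daemons (the exact command and container name pattern are assumptions):

```yaml
# Illustrative sketch: detect a running mon container and record the
# result so the mons handler knows whether a restart is meaningful.
- name: Check for a mon container
  ansible.builtin.command: >
    podman ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}
  register: ceph_mon_container_stat
  changed_when: false
  failed_when: false

- name: Set_fact handler_mon_status
  ansible.builtin.set_fact:
    handler_mon_status: "{{ ceph_mon_container_stat.stdout | length > 0 }}"
```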
2026-04-06 02:57:07.800879 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:57:07.800887 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:57:07.800895 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:57:07.800904 | orchestrator | 2026-04-06 02:57:07.800912 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-06 02:57:07.800920 | orchestrator | Monday 06 April 2026 02:56:50 +0000 (0:00:00.362) 0:03:52.688 ********** 2026-04-06 02:57:07.800929 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:57:07.800937 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:57:07.800945 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:57:07.800953 | orchestrator | 2026-04-06 02:57:07.800962 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-06 02:57:07.800970 | orchestrator | Monday 06 April 2026 02:56:50 +0000 (0:00:00.684) 0:03:53.373 ********** 2026-04-06 02:57:07.800979 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:57:07.800988 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:57:07.800996 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:57:07.801005 | orchestrator | 2026-04-06 02:57:07.801026 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-04-06 02:57:07.801034 | orchestrator | Monday 06 April 2026 02:56:51 +0000 (0:00:00.638) 0:03:54.011 ********** 2026-04-06 02:57:07.801043 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:57:07.801051 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:57:07.801059 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:57:07.801068 | orchestrator | 2026-04-06 02:57:07.801076 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-04-06 02:57:07.801085 | orchestrator | Monday 06 April 2026 02:56:51 +0000 (0:00:00.367) 0:03:54.379 ********** 2026-04-06 02:57:07.801094 | orchestrator | included: 
/ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:57:07.801103 | orchestrator | 2026-04-06 02:57:07.801111 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-04-06 02:57:07.801119 | orchestrator | Monday 06 April 2026 02:56:52 +0000 (0:00:00.925) 0:03:55.305 ********** 2026-04-06 02:57:07.801128 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:57:07.801137 | orchestrator | 2026-04-06 02:57:07.801145 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-04-06 02:57:07.801169 | orchestrator | Monday 06 April 2026 02:56:53 +0000 (0:00:00.189) 0:03:55.495 ********** 2026-04-06 02:57:07.801177 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-06 02:57:07.801185 | orchestrator | 2026-04-06 02:57:07.801192 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-04-06 02:57:07.801199 | orchestrator | Monday 06 April 2026 02:56:54 +0000 (0:00:01.119) 0:03:56.614 ********** 2026-04-06 02:57:07.801214 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:57:07.801221 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:57:07.801228 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:57:07.801235 | orchestrator | 2026-04-06 02:57:07.801242 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-04-06 02:57:07.801250 | orchestrator | Monday 06 April 2026 02:56:54 +0000 (0:00:00.418) 0:03:57.032 ********** 2026-04-06 02:57:07.801257 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:57:07.801264 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:57:07.801271 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:57:07.801278 | orchestrator | 2026-04-06 02:57:07.801285 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-04-06 02:57:07.801292 | orchestrator 
| Monday 06 April 2026 02:56:55 +0000 (0:00:00.673) 0:03:57.706 ********** 2026-04-06 02:57:07.801299 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:57:07.801307 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:57:07.801313 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:57:07.801321 | orchestrator | 2026-04-06 02:57:07.801328 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-04-06 02:57:07.801335 | orchestrator | Monday 06 April 2026 02:56:56 +0000 (0:00:01.337) 0:03:59.044 ********** 2026-04-06 02:57:07.801342 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:57:07.801349 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:57:07.801357 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:57:07.801364 | orchestrator | 2026-04-06 02:57:07.801371 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-04-06 02:57:07.801378 | orchestrator | Monday 06 April 2026 02:56:57 +0000 (0:00:00.853) 0:03:59.897 ********** 2026-04-06 02:57:07.801385 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:57:07.801392 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:57:07.801399 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:57:07.801406 | orchestrator | 2026-04-06 02:57:07.801413 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-04-06 02:57:07.801420 | orchestrator | Monday 06 April 2026 02:56:58 +0000 (0:00:00.634) 0:04:00.532 ********** 2026-04-06 02:57:07.801427 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:57:07.801434 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:57:07.801441 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:57:07.801449 | orchestrator | 2026-04-06 02:57:07.801456 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-04-06 02:57:07.801478 | orchestrator | Monday 06 April 2026 
02:56:59 +0000 (0:00:01.035) 0:04:01.568 ********** 2026-04-06 02:57:07.801486 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:57:07.801493 | orchestrator | 2026-04-06 02:57:07.801500 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-04-06 02:57:07.801507 | orchestrator | Monday 06 April 2026 02:57:00 +0000 (0:00:01.269) 0:04:02.837 ********** 2026-04-06 02:57:07.801514 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:57:07.801522 | orchestrator | 2026-04-06 02:57:07.801528 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-04-06 02:57:07.801535 | orchestrator | Monday 06 April 2026 02:57:01 +0000 (0:00:00.722) 0:04:03.560 ********** 2026-04-06 02:57:07.801542 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-06 02:57:07.801549 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 02:57:07.801556 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 02:57:07.801563 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-06 02:57:07.801571 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-04-06 02:57:07.801578 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-06 02:57:07.801584 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-06 02:57:07.801591 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-04-06 02:57:07.801598 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-06 02:57:07.801611 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-04-06 02:57:07.801618 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-04-06 02:57:07.801625 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-04-06 02:57:07.801632 | orchestrator | 2026-04-06 
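"Generate monitor initial keyring" runs once, delegated to localhost, to produce a shared secret; "Create monitor initial keyring" then writes that secret into a `mon.` keyring on each monitor, and the admin keyring is slurped from the first mon and fanned out to the others. A hedged sketch of the keyring steps (the secret-generation one-liner and paths are assumptions about what the role wraps):

```yaml
# Sketch only: generate a Ceph-format base64 secret once, then build
# the mon keyring on every monitor with ceph-authtool.
- name: Generate monitor initial keyring
  ansible.builtin.command: >
    python3 -c "import os,struct,time,base64;
    key=os.urandom(16);
    hdr=struct.pack('<hiih',1,int(time.time()),0,len(key));
    print(base64.b64encode(hdr+key).decode())"
  register: monitor_keyring
  run_once: true
  delegate_to: localhost

- name: Create monitor initial keyring
  ansible.builtin.command: >
    ceph-authtool /etc/ceph/ceph.mon.keyring --create-keyring
    --name=mon. --add-key={{ monitor_keyring.stdout }}
    --cap mon 'allow *'
```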
02:57:07.801639 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-04-06 02:57:07.801646 | orchestrator | Monday 06 April 2026 02:57:04 +0000 (0:00:02.976) 0:04:06.536 ********** 2026-04-06 02:57:07.801653 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:57:07.801660 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:57:07.801671 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:57:07.801678 | orchestrator | 2026-04-06 02:57:07.801685 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-04-06 02:57:07.801692 | orchestrator | Monday 06 April 2026 02:57:05 +0000 (0:00:01.151) 0:04:07.687 ********** 2026-04-06 02:57:07.801699 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:57:07.801706 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:57:07.801714 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:57:07.801720 | orchestrator | 2026-04-06 02:57:07.801727 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-04-06 02:57:07.801734 | orchestrator | Monday 06 April 2026 02:57:05 +0000 (0:00:00.646) 0:04:08.334 ********** 2026-04-06 02:57:07.801741 | orchestrator | ok: [testbed-node-0] 2026-04-06 02:57:07.801748 | orchestrator | ok: [testbed-node-1] 2026-04-06 02:57:07.801755 | orchestrator | ok: [testbed-node-2] 2026-04-06 02:57:07.801762 | orchestrator | 2026-04-06 02:57:07.801769 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-04-06 02:57:07.801776 | orchestrator | Monday 06 April 2026 02:57:06 +0000 (0:00:00.391) 0:04:08.726 ********** 2026-04-06 02:57:07.801783 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:57:07.801790 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:57:07.801797 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:57:07.801804 | orchestrator | 2026-04-06 02:57:07.801815 | orchestrator | TASK 
[ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-04-06 02:57:48.701597 | orchestrator | Monday 06 April 2026 02:57:07 +0000 (0:00:01.449) 0:04:10.176 ********** 2026-04-06 02:57:48.701686 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:57:48.701699 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:57:48.701709 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:57:48.701718 | orchestrator | 2026-04-06 02:57:48.701728 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-04-06 02:57:48.701737 | orchestrator | Monday 06 April 2026 02:57:09 +0000 (0:00:01.392) 0:04:11.569 ********** 2026-04-06 02:57:48.701746 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:57:48.701755 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:57:48.701764 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:57:48.701773 | orchestrator | 2026-04-06 02:57:48.701782 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-04-06 02:57:48.701791 | orchestrator | Monday 06 April 2026 02:57:09 +0000 (0:00:00.692) 0:04:12.261 ********** 2026-04-06 02:57:48.701800 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:57:48.701809 | orchestrator | 2026-04-06 02:57:48.701819 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-04-06 02:57:48.701828 | orchestrator | Monday 06 April 2026 02:57:10 +0000 (0:00:00.653) 0:04:12.915 ********** 2026-04-06 02:57:48.701837 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:57:48.701846 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:57:48.701855 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:57:48.701864 | orchestrator | 2026-04-06 02:57:48.701872 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] 
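"Generate initial monmap" and "Ceph monitor mkfs with keyring" bootstrap each monitor's store: the monmap enumerates all three mons (the 192.168.16.10-12 addresses visible earlier in this log) so every mon knows its peers before first start, and `ceph-mon --mkfs` initializes the store from that map plus the keyring. Roughly, with the container wrapper from the `monmaptool container command` fact omitted and `fsid` left as a variable:

```yaml
# Sketch of the two bootstrap commands the tasks above wrap.
- name: Generate initial monmap
  ansible.builtin.command: >
    monmaptool --create --clobber
    --add testbed-node-0 192.168.16.10
    --add testbed-node-1 192.168.16.11
    --add testbed-node-2 192.168.16.12
    --fsid {{ fsid }}
    /etc/ceph/monmap

- name: Ceph monitor mkfs with keyring
  ansible.builtin.command: >
    ceph-mon --mkfs -i {{ ansible_facts['hostname'] }}
    --monmap /etc/ceph/monmap
    --keyring /etc/ceph/ceph.mon.keyring
```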
*********************** 2026-04-06 02:57:48.701881 | orchestrator | Monday 06 April 2026 02:57:10 +0000 (0:00:00.354) 0:04:13.270 ********** 2026-04-06 02:57:48.701890 | orchestrator | skipping: [testbed-node-0] 2026-04-06 02:57:48.701919 | orchestrator | skipping: [testbed-node-1] 2026-04-06 02:57:48.701928 | orchestrator | skipping: [testbed-node-2] 2026-04-06 02:57:48.701937 | orchestrator | 2026-04-06 02:57:48.701946 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-04-06 02:57:48.701968 | orchestrator | Monday 06 April 2026 02:57:11 +0000 (0:00:00.633) 0:04:13.904 ********** 2026-04-06 02:57:48.701977 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:57:48.701987 | orchestrator | 2026-04-06 02:57:48.702005 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-04-06 02:57:48.702013 | orchestrator | Monday 06 April 2026 02:57:12 +0000 (0:00:00.621) 0:04:14.526 ********** 2026-04-06 02:57:48.702091 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:57:48.702100 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:57:48.702109 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:57:48.702118 | orchestrator | 2026-04-06 02:57:48.702128 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-04-06 02:57:48.702139 | orchestrator | Monday 06 April 2026 02:57:14 +0000 (0:00:01.975) 0:04:16.501 ********** 2026-04-06 02:57:48.702149 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:57:48.702159 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:57:48.702168 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:57:48.702178 | orchestrator | 2026-04-06 02:57:48.702188 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-04-06 02:57:48.702198 | orchestrator | Monday 
06 April 2026 02:57:15 +0000 (0:00:01.462) 0:04:17.964 ********** 2026-04-06 02:57:48.702208 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:57:48.702218 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:57:48.702228 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:57:48.702238 | orchestrator | 2026-04-06 02:57:48.702247 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-04-06 02:57:48.702258 | orchestrator | Monday 06 April 2026 02:57:17 +0000 (0:00:01.760) 0:04:19.724 ********** 2026-04-06 02:57:48.702268 | orchestrator | changed: [testbed-node-0] 2026-04-06 02:57:48.702278 | orchestrator | changed: [testbed-node-1] 2026-04-06 02:57:48.702288 | orchestrator | changed: [testbed-node-2] 2026-04-06 02:57:48.702298 | orchestrator | 2026-04-06 02:57:48.702307 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-04-06 02:57:48.702318 | orchestrator | Monday 06 April 2026 02:57:19 +0000 (0:00:01.875) 0:04:21.600 ********** 2026-04-06 02:57:48.702328 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 02:57:48.702338 | orchestrator | 2026-04-06 02:57:48.702348 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
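After the templated systemd unit and `ceph-mon.target` are in place, the monitors are started and the play blocks until they form a quorum before fetching the initial keys. A sketch of the start-and-wait sequence, assuming a `ceph-mon@<hostname>` unit name and the `mons` inventory group (both assumptions about this deployment):

```yaml
# Sketch: start the containerized mon via its generated unit, then poll
# quorum_status until all mons appear in the quorum.
- name: Start the monitor service
  ansible.builtin.systemd:
    name: "ceph-mon@{{ ansible_facts['hostname'] }}"
    state: started
    enabled: true
    daemon_reload: true

- name: Waiting for the monitor(s) to form the quorum...
  ansible.builtin.command: ceph quorum_status --format json
  register: quorum
  until: >-
    (quorum.stdout | from_json)['quorum_names'] | length
    == groups['mons'] | length
  retries: 20
  delay: 10
  changed_when: false
  run_once: true
```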
*************
2026-04-06 02:57:48.702358 | orchestrator | Monday 06 April 2026 02:57:20 +0000 (0:00:00.916) 0:04:22.516 **********
2026-04-06 02:57:48.702368 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:57:48.702379 | orchestrator |
2026-04-06 02:57:48.702414 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-04-06 02:57:48.702424 | orchestrator | Monday 06 April 2026 02:57:21 +0000 (0:00:01.146) 0:04:23.663 **********
2026-04-06 02:57:48.702445 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:57:48.702455 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:57:48.702466 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:57:48.702476 | orchestrator |
2026-04-06 02:57:48.702502 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-04-06 02:57:48.702515 | orchestrator | Monday 06 April 2026 02:57:29 +0000 (0:00:08.397) 0:04:32.061 **********
2026-04-06 02:57:48.702530 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:57:48.702545 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:57:48.702559 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:57:48.702572 | orchestrator |
2026-04-06 02:57:48.702587 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-04-06 02:57:48.702614 | orchestrator | Monday 06 April 2026 02:57:30 +0000 (0:00:00.352) 0:04:32.413 **********
2026-04-06 02:57:48.702648 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3b0d357fd4a19a5f96a69714736186a01bc2336b'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-06 02:57:48.702664 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3b0d357fd4a19a5f96a69714736186a01bc2336b'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-06 02:57:48.702680 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3b0d357fd4a19a5f96a69714736186a01bc2336b'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-06 02:57:48.702694 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3b0d357fd4a19a5f96a69714736186a01bc2336b'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-06 02:57:48.702708 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3b0d357fd4a19a5f96a69714736186a01bc2336b'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-06 02:57:48.702725 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3b0d357fd4a19a5f96a69714736186a01bc2336b'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__3b0d357fd4a19a5f96a69714736186a01bc2336b'}])
2026-04-06 02:57:48.702741 | orchestrator |
2026-04-06 02:57:48.702754 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-06 02:57:48.702769 | orchestrator | Monday 06 April 2026 02:57:44 +0000 (0:00:14.579) 0:04:46.993 **********
2026-04-06 02:57:48.702783 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:57:48.702798 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:57:48.702813 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:57:48.702827 | orchestrator |
2026-04-06 02:57:48.702841 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-06 02:57:48.702856 | orchestrator | Monday 06 April 2026 02:57:44 +0000 (0:00:00.396) 0:04:47.390 **********
2026-04-06 02:57:48.702870 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:57:48.702884 | orchestrator |
2026-04-06 02:57:48.702896 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-06 02:57:48.702905 | orchestrator | Monday 06 April 2026 02:57:45 +0000 (0:00:00.882) 0:04:48.272 **********
2026-04-06 02:57:48.702914 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:57:48.702923 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:57:48.702931 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:57:48.702940 | orchestrator |
2026-04-06 02:57:48.702949 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-06 02:57:48.702958 | orchestrator | Monday 06 April 2026 02:57:46 +0000 (0:00:00.403) 0:04:48.675 **********
2026-04-06 02:57:48.702975 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:57:48.702984 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:57:48.702999 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:57:48.703008 | orchestrator |
2026-04-06 02:57:48.703017 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-06 02:57:48.703025 | orchestrator | Monday 06 April 2026 02:57:46 +0000 (0:00:00.417) 0:04:49.092 **********
2026-04-06 02:57:48.703034 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 02:57:48.703043 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-06 02:57:48.703052 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-06 02:57:48.703060 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:57:48.703069 | orchestrator |
2026-04-06 02:57:48.703078 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-06 02:57:48.703086 | orchestrator | Monday 06 April 2026 02:57:47 +0000 (0:00:01.062) 0:04:50.155 **********
2026-04-06 02:57:48.703095 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:57:48.703104 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:57:48.703112 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:57:48.703121 | orchestrator |
2026-04-06 02:57:48.703129 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-04-06 02:57:48.703138 | orchestrator |
2026-04-06 02:57:48.703147 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-06 02:57:48.703164 | orchestrator | Monday 06 April 2026 02:57:48 +0000 (0:00:00.933) 0:04:51.088 **********
2026-04-06 02:58:17.066946 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:58:17.067085 | orchestrator |
2026-04-06 02:58:17.067105 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-06 02:58:17.067119 | orchestrator | Monday 06 April 2026 02:57:49 +0000 (0:00:00.610) 0:04:51.699 **********
2026-04-06 02:58:17.067130 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:58:17.067141 | orchestrator |
2026-04-06 02:58:17.067153 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-06 02:58:17.067165 | orchestrator | Monday 06 April 2026 02:57:50 +0000 (0:00:00.844) 0:04:52.544 **********
2026-04-06 02:58:17.067177 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:58:17.067188 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:58:17.067199 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:58:17.067210 | orchestrator |
2026-04-06 02:58:17.067221 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-06 02:58:17.067233 | orchestrator | Monday 06 April 2026 02:57:50 +0000 (0:00:00.756) 0:04:53.300 **********
2026-04-06 02:58:17.067243 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:58:17.067252 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:58:17.067259 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:58:17.067266 | orchestrator |
2026-04-06 02:58:17.067273 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-06 02:58:17.067280 | orchestrator | Monday 06 April 2026 02:57:51 +0000 (0:00:00.328) 0:04:53.628 **********
2026-04-06 02:58:17.067287 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:58:17.067294 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:58:17.067301 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:58:17.067307 | orchestrator |
2026-04-06 02:58:17.067314 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-06 02:58:17.067321 | orchestrator | Monday 06 April 2026 02:57:51 +0000 (0:00:00.657) 0:04:54.286 **********
2026-04-06 02:58:17.067328 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:58:17.067335 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:58:17.067342 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:58:17.067349 | orchestrator |
2026-04-06 02:58:17.067356 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-06 02:58:17.067382 | orchestrator | Monday 06 April 2026 02:57:52 +0000 (0:00:00.370) 0:04:54.657 **********
2026-04-06 02:58:17.067389 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:58:17.067396 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:58:17.067403 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:58:17.067409 | orchestrator |
2026-04-06 02:58:17.067416 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-06 02:58:17.067423 | orchestrator | Monday 06 April 2026 02:57:53 +0000 (0:00:00.766) 0:04:55.423 **********
2026-04-06 02:58:17.067430 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:58:17.067437 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:58:17.067444 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:58:17.067450 | orchestrator |
2026-04-06 02:58:17.067457 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-06 02:58:17.067464 | orchestrator | Monday 06 April 2026 02:57:53 +0000 (0:00:00.369) 0:04:55.792 **********
2026-04-06 02:58:17.067471 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:58:17.067478 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:58:17.067485 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:58:17.067492 | orchestrator |
2026-04-06 02:58:17.067500 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-06 02:58:17.067583 | orchestrator | Monday 06 April 2026 02:57:54 +0000 (0:00:00.633) 0:04:56.426 **********
2026-04-06 02:58:17.067596 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:58:17.067608 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:58:17.067620 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:58:17.067631 | orchestrator |
2026-04-06 02:58:17.067642 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-06 02:58:17.067652 | orchestrator | Monday 06 April 2026 02:57:54 +0000 (0:00:00.780) 0:04:57.206 **********
2026-04-06 02:58:17.067663 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:58:17.067675 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:58:17.067685 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:58:17.067697 | orchestrator |
2026-04-06 02:58:17.067706 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-06 02:58:17.067713 | orchestrator | Monday 06 April 2026 02:57:55 +0000 (0:00:00.837) 0:04:58.043 **********
2026-04-06 02:58:17.067720 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:58:17.067727 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:58:17.067734 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:58:17.067740 | orchestrator |
2026-04-06 02:58:17.067760 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-06 02:58:17.067766 | orchestrator | Monday 06 April 2026 02:57:55 +0000 (0:00:00.338) 0:04:58.382 **********
2026-04-06 02:58:17.067773 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:58:17.067780 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:58:17.067787 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:58:17.067793 | orchestrator |
2026-04-06 02:58:17.067800 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-06 02:58:17.067807 | orchestrator | Monday 06 April 2026 02:57:56 +0000 (0:00:00.678) 0:04:59.060 **********
2026-04-06 02:58:17.067813 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:58:17.067820 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:58:17.067826 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:58:17.067833 | orchestrator |
2026-04-06 02:58:17.067842 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-06 02:58:17.067854 | orchestrator | Monday 06 April 2026 02:57:57 +0000 (0:00:00.363) 0:04:59.424 **********
2026-04-06 02:58:17.067865 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:58:17.067875 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:58:17.067882 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:58:17.067888 | orchestrator |
2026-04-06 02:58:17.067895 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-06 02:58:17.067918 | orchestrator | Monday 06 April 2026 02:57:57 +0000 (0:00:00.367) 0:04:59.792 **********
2026-04-06 02:58:17.067935 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:58:17.067942 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:58:17.067948 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:58:17.067955 | orchestrator |
2026-04-06 02:58:17.067961 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-06 02:58:17.067968 | orchestrator | Monday 06 April 2026 02:57:57 +0000 (0:00:00.344) 0:05:00.136 **********
2026-04-06 02:58:17.067975 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:58:17.067981 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:58:17.067988 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:58:17.067995 | orchestrator |
2026-04-06 02:58:17.068001 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-06 02:58:17.068008 | orchestrator | Monday 06 April 2026 02:57:58 +0000 (0:00:00.661) 0:05:00.798 **********
2026-04-06 02:58:17.068014 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:58:17.068021 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:58:17.068028 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:58:17.068034 | orchestrator |
2026-04-06 02:58:17.068041 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-06 02:58:17.068048 | orchestrator | Monday 06 April 2026 02:57:58 +0000 (0:00:00.361) 0:05:01.159 **********
2026-04-06 02:58:17.068054 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:58:17.068061 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:58:17.068068 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:58:17.068074 | orchestrator |
2026-04-06 02:58:17.068081 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-06 02:58:17.068088 | orchestrator | Monday 06 April 2026 02:57:59 +0000 (0:00:00.346) 0:05:01.506 **********
2026-04-06 02:58:17.068094 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:58:17.068101 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:58:17.068107 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:58:17.068114 | orchestrator |
2026-04-06 02:58:17.068121 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-06 02:58:17.068131 | orchestrator | Monday 06 April 2026 02:57:59 +0000 (0:00:00.354) 0:05:01.860 **********
2026-04-06 02:58:17.068143 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:58:17.068154 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:58:17.068161 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:58:17.068168 | orchestrator |
2026-04-06 02:58:17.068175 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-06 02:58:17.068182 | orchestrator | Monday 06 April 2026 02:58:00 +0000 (0:00:00.919) 0:05:02.780 **********
2026-04-06 02:58:17.068189 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 02:58:17.068195 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 02:58:17.068203 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 02:58:17.068209 | orchestrator |
2026-04-06 02:58:17.068216 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-06 02:58:17.068223 | orchestrator | Monday 06 April 2026 02:58:01 +0000 (0:00:00.693) 0:05:03.474 **********
2026-04-06 02:58:17.068230 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:58:17.068236 | orchestrator |
2026-04-06 02:58:17.068243 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-04-06 02:58:17.068250 | orchestrator | Monday 06 April 2026 02:58:01 +0000 (0:00:00.863) 0:05:04.338 **********
2026-04-06 02:58:17.068256 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:58:17.068263 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:58:17.068270 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:58:17.068276 | orchestrator |
2026-04-06 02:58:17.068283 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-04-06 02:58:17.068290 | orchestrator | Monday 06 April 2026 02:58:02 +0000 (0:00:00.738) 0:05:05.076 **********
2026-04-06 02:58:17.068302 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:58:17.068309 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:58:17.068316 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:58:17.068322 | orchestrator |
2026-04-06 02:58:17.068329 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-04-06 02:58:17.068336 | orchestrator | Monday 06 April 2026 02:58:03 +0000 (0:00:00.375) 0:05:05.451 **********
2026-04-06 02:58:17.068343 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-06 02:58:17.068350 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-06 02:58:17.068357 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-06 02:58:17.068363 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-04-06 02:58:17.068370 | orchestrator |
2026-04-06 02:58:17.068377 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-04-06 02:58:17.068388 | orchestrator | Monday 06 April 2026 02:58:13 +0000 (0:00:10.789) 0:05:16.241 **********
2026-04-06 02:58:17.068395 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:58:17.068401 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:58:17.068408 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:58:17.068415 | orchestrator |
2026-04-06 02:58:17.068421 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-04-06 02:58:17.068428 | orchestrator | Monday 06 April 2026 02:58:14 +0000 (0:00:00.393) 0:05:16.635 **********
2026-04-06 02:58:17.068435 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-06 02:58:17.068442 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-06 02:58:17.068448 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-06 02:58:17.068455 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-06 02:58:17.068462 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-06 02:58:17.068468 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-06 02:58:17.068475 | orchestrator |
2026-04-06 02:58:17.068481 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-04-06 02:58:17.068488 | orchestrator | Monday 06 April 2026 02:58:16 +0000 (0:00:02.579) 0:05:19.214 **********
2026-04-06 02:58:17.068495 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-06 02:58:17.068533 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-06 02:59:16.345056 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-06 02:59:16.345215 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-06 02:59:16.345234 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-06 02:59:16.345246 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-06 02:59:16.345258 | orchestrator |
2026-04-06 02:59:16.345270 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-04-06 02:59:16.345283 | orchestrator | Monday 06 April 2026 02:58:18 +0000 (0:00:01.326) 0:05:20.541 **********
2026-04-06 02:59:16.345299 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:59:16.345330 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:59:16.345352 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:59:16.345371 | orchestrator |
2026-04-06 02:59:16.345388 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-04-06 02:59:16.345406 | orchestrator | Monday 06 April 2026 02:58:18 +0000 (0:00:00.726) 0:05:21.268 **********
2026-04-06 02:59:16.345426 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:59:16.345446 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:59:16.345465 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:59:16.345483 | orchestrator |
2026-04-06 02:59:16.345504 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-06 02:59:16.345524 | orchestrator | Monday 06 April 2026 02:58:19 +0000 (0:00:00.329) 0:05:21.597 **********
2026-04-06 02:59:16.345573 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:59:16.345588 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:59:16.345627 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:59:16.345641 | orchestrator |
2026-04-06 02:59:16.345654 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-06 02:59:16.345666 | orchestrator | Monday 06 April 2026 02:58:19 +0000 (0:00:00.636) 0:05:22.234 **********
2026-04-06 02:59:16.345679 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:59:16.345692 | orchestrator |
2026-04-06 02:59:16.345704 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-04-06 02:59:16.345717 | orchestrator | Monday 06 April 2026 02:58:20 +0000 (0:00:00.595) 0:05:22.829 **********
2026-04-06 02:59:16.345731 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:59:16.345744 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:59:16.345757 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:59:16.345769 | orchestrator |
2026-04-06 02:59:16.345782 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-04-06 02:59:16.345795 | orchestrator | Monday 06 April 2026 02:58:20 +0000 (0:00:00.352) 0:05:23.182 **********
2026-04-06 02:59:16.345807 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:59:16.345819 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:59:16.345831 | orchestrator | skipping: [testbed-node-2]
2026-04-06 02:59:16.345844 | orchestrator |
2026-04-06 02:59:16.345856 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-04-06 02:59:16.345868 | orchestrator | Monday 06 April 2026 02:58:21 +0000 (0:00:00.652) 0:05:23.835 **********
2026-04-06 02:59:16.345882 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:59:16.345905 | orchestrator |
2026-04-06 02:59:16.345924 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-04-06 02:59:16.345946 | orchestrator | Monday 06 April 2026 02:58:22 +0000 (0:00:00.591) 0:05:24.427 **********
2026-04-06 02:59:16.345959 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:59:16.345970 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:59:16.345981 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:59:16.345992 | orchestrator |
2026-04-06 02:59:16.346003 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-06 02:59:16.346013 | orchestrator | Monday 06 April 2026 02:58:23 +0000 (0:00:01.364) 0:05:25.791 **********
2026-04-06 02:59:16.346089 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:59:16.346101 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:59:16.346112 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:59:16.346123 | orchestrator |
2026-04-06 02:59:16.346134 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-06 02:59:16.346145 | orchestrator | Monday 06 April 2026 02:58:24 +0000 (0:00:01.562) 0:05:27.354 **********
2026-04-06 02:59:16.346155 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:59:16.346166 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:59:16.346177 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:59:16.346188 | orchestrator |
2026-04-06 02:59:16.346199 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-06 02:59:16.346210 | orchestrator | Monday 06 April 2026 02:58:27 +0000 (0:00:02.651) 0:05:30.005 **********
2026-04-06 02:59:16.346221 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:59:16.346246 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:59:16.346257 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:59:16.346268 | orchestrator |
2026-04-06 02:59:16.346279 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-06 02:59:16.346290 | orchestrator | Monday 06 April 2026 02:58:30 +0000 (0:00:02.943) 0:05:32.949 **********
2026-04-06 02:59:16.346301 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:59:16.346312 | orchestrator | skipping: [testbed-node-1]
2026-04-06 02:59:16.346323 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-04-06 02:59:16.346334 | orchestrator |
2026-04-06 02:59:16.346344 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-04-06 02:59:16.346367 | orchestrator | Monday 06 April 2026 02:58:31 +0000 (0:00:00.786) 0:05:33.736 **********
2026-04-06 02:59:16.346378 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-04-06 02:59:16.346390 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-04-06 02:59:16.346401 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-04-06 02:59:16.346443 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-04-06 02:59:16.346463 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-06 02:59:16.346482 | orchestrator |
2026-04-06 02:59:16.346502 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-04-06 02:59:16.346521 | orchestrator | Monday 06 April 2026 02:58:55 +0000 (0:00:24.634) 0:05:58.370 **********
2026-04-06 02:59:16.346566 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-06 02:59:16.346586 | orchestrator |
2026-04-06 02:59:16.346605 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-04-06 02:59:16.346623 | orchestrator | Monday 06 April 2026 02:58:57 +0000 (0:00:01.359) 0:05:59.730 **********
2026-04-06 02:59:16.346641 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:59:16.346652 | orchestrator |
2026-04-06 02:59:16.346663 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-04-06 02:59:16.346674 | orchestrator | Monday 06 April 2026 02:58:57 +0000 (0:00:00.372) 0:06:00.102 **********
2026-04-06 02:59:16.346685 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:59:16.346696 | orchestrator |
2026-04-06 02:59:16.346706 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-04-06 02:59:16.346717 | orchestrator | Monday 06 April 2026 02:58:57 +0000 (0:00:00.158) 0:06:00.260 **********
2026-04-06 02:59:16.346728 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-04-06 02:59:16.346739 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-04-06 02:59:16.346749 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-04-06 02:59:16.346760 | orchestrator |
2026-04-06 02:59:16.346771 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-04-06 02:59:16.346782 | orchestrator | Monday 06 April 2026 02:59:04 +0000 (0:00:06.882) 0:06:07.143 **********
2026-04-06 02:59:16.346793 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-04-06 02:59:16.346804 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-04-06 02:59:16.346814 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-04-06 02:59:16.346825 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-04-06 02:59:16.346836 | orchestrator |
2026-04-06 02:59:16.346847 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-06 02:59:16.346857 | orchestrator | Monday 06 April 2026 02:59:09 +0000 (0:00:05.149) 0:06:12.292 **********
2026-04-06 02:59:16.346868 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:59:16.346879 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:59:16.346890 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:59:16.346900 | orchestrator |
2026-04-06 02:59:16.346911 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-06 02:59:16.346922 | orchestrator | Monday 06 April 2026 02:59:10 +0000 (0:00:00.734) 0:06:13.026 **********
2026-04-06 02:59:16.346933 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 02:59:16.346944 | orchestrator |
2026-04-06 02:59:16.346955 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-06 02:59:16.346966 | orchestrator | Monday 06 April 2026 02:59:11 +0000 (0:00:00.594) 0:06:13.621 **********
2026-04-06 02:59:16.346985 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:59:16.346996 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:59:16.347007 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:59:16.347017 | orchestrator |
2026-04-06 02:59:16.347028 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-06 02:59:16.347039 | orchestrator | Monday 06 April 2026 02:59:11 +0000 (0:00:00.664) 0:06:14.286 **********
2026-04-06 02:59:16.347049 | orchestrator | changed: [testbed-node-0]
2026-04-06 02:59:16.347060 | orchestrator | changed: [testbed-node-1]
2026-04-06 02:59:16.347071 | orchestrator | changed: [testbed-node-2]
2026-04-06 02:59:16.347082 | orchestrator |
2026-04-06 02:59:16.347093 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-06 02:59:16.347104 | orchestrator | Monday 06 April 2026 02:59:13 +0000 (0:00:01.274) 0:06:15.560 **********
2026-04-06 02:59:16.347114 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 02:59:16.347125 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-06 02:59:16.347136 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-06 02:59:16.347146 | orchestrator | skipping: [testbed-node-0]
2026-04-06 02:59:16.347157 | orchestrator |
2026-04-06 02:59:16.347175 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-06 02:59:16.347186 | orchestrator | Monday 06 April 2026 02:59:13 +0000 (0:00:00.737) 0:06:16.297 **********
2026-04-06 02:59:16.347197 | orchestrator | ok: [testbed-node-0]
2026-04-06 02:59:16.347208 | orchestrator | ok: [testbed-node-1]
2026-04-06 02:59:16.347219 | orchestrator | ok: [testbed-node-2]
2026-04-06 02:59:16.347230 | orchestrator |
2026-04-06 02:59:16.347241 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-04-06 02:59:16.347251 | orchestrator |
2026-04-06 02:59:16.347262 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-06 02:59:16.347273 | orchestrator | Monday 06 April 2026 02:59:14 +0000 (0:00:00.920) 0:06:17.218 **********
2026-04-06 02:59:16.347284 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 02:59:16.347297 | orchestrator |
2026-04-06 02:59:16.347308 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-06 02:59:16.347319 | orchestrator | Monday 06 April 2026 02:59:15 +0000 (0:00:00.615) 0:06:17.834 **********
2026-04-06 02:59:16.347329 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 02:59:16.347340 | orchestrator |
2026-04-06 02:59:16.347360 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-06 02:59:34.124741 | orchestrator | Monday 06 April 2026 02:59:16 +0000 (0:00:00.895) 0:06:18.730 **********
2026-04-06 02:59:34.124846 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:59:34.124858 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:59:34.124866 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:59:34.124873 | orchestrator |
2026-04-06 02:59:34.124882 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-06 02:59:34.124890 | orchestrator | Monday 06 April 2026 02:59:16 +0000 (0:00:00.385) 0:06:19.116 **********
2026-04-06 02:59:34.124897 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:59:34.124906 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:59:34.124913 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:59:34.124921 | orchestrator |
2026-04-06 02:59:34.124928 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-06 02:59:34.124936 | orchestrator | Monday 06 April 2026 02:59:17 +0000 (0:00:00.708) 0:06:19.825 **********
2026-04-06 02:59:34.124943 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:59:34.124951 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:59:34.124958 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:59:34.124965 | orchestrator |
2026-04-06 02:59:34.124973 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-06 02:59:34.124999 | orchestrator | Monday 06 April 2026 02:59:18 +0000 (0:00:00.719) 0:06:20.544 **********
2026-04-06 02:59:34.125007 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:59:34.125015 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:59:34.125022 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:59:34.125029 | orchestrator |
2026-04-06 02:59:34.125037 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-06 02:59:34.125044 | orchestrator | Monday 06 April 2026 02:59:19 +0000 (0:00:00.999) 0:06:21.543 **********
2026-04-06 02:59:34.125051 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:59:34.125059 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:59:34.125066 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:59:34.125074 | orchestrator |
2026-04-06 02:59:34.125081 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-06 02:59:34.125088 | orchestrator | Monday 06 April 2026 02:59:19 +0000 (0:00:00.349) 0:06:21.893 **********
2026-04-06 02:59:34.125096 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:59:34.125103 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:59:34.125110 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:59:34.125118 | orchestrator |
2026-04-06 02:59:34.125125 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-06 02:59:34.125132 | orchestrator | Monday 06 April 2026 02:59:19 +0000 (0:00:00.345) 0:06:22.238 **********
2026-04-06 02:59:34.125140 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:59:34.125147 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:59:34.125154 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:59:34.125161 | orchestrator |
2026-04-06 02:59:34.125169 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-06 02:59:34.125176 | orchestrator | Monday 06 April 2026 02:59:20 +0000 (0:00:00.350) 0:06:22.589 **********
2026-04-06 02:59:34.125183 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:59:34.125191 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:59:34.125198 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:59:34.125206 | orchestrator |
2026-04-06 02:59:34.125213 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-06 02:59:34.125220 | orchestrator | Monday 06 April 2026 02:59:21 +0000 (0:00:01.086) 0:06:23.675 **********
2026-04-06 02:59:34.125227 | orchestrator | ok: [testbed-node-3]
2026-04-06 02:59:34.125235 | orchestrator | ok: [testbed-node-4]
2026-04-06 02:59:34.125242 | orchestrator | ok: [testbed-node-5]
2026-04-06 02:59:34.125249 | orchestrator |
2026-04-06 02:59:34.125257 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-06 02:59:34.125266 | orchestrator | Monday 06 April 2026 02:59:22 +0000 (0:00:00.731) 0:06:24.407 **********
2026-04-06 02:59:34.125275 | orchestrator | skipping: [testbed-node-3]
2026-04-06 02:59:34.125283 | orchestrator | skipping: [testbed-node-4]
2026-04-06 02:59:34.125292 | orchestrator | skipping: [testbed-node-5]
2026-04-06 02:59:34.125299 | orchestrator |
2026-04-06 02:59:34.125309 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-06 02:59:34.125318 | orchestrator | Monday 06 April 2026 02:59:22 +0000 (0:00:00.388) 0:06:24.796 **********
2026-04-06 02:59:34.125326 | orchestrator | skipping:
[testbed-node-3] 2026-04-06 02:59:34.125335 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:59:34.125343 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:59:34.125351 | orchestrator | 2026-04-06 02:59:34.125360 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-06 02:59:34.125368 | orchestrator | Monday 06 April 2026 02:59:22 +0000 (0:00:00.339) 0:06:25.135 ********** 2026-04-06 02:59:34.125376 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:59:34.125384 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:59:34.125409 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:59:34.125417 | orchestrator | 2026-04-06 02:59:34.125426 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-06 02:59:34.125435 | orchestrator | Monday 06 April 2026 02:59:23 +0000 (0:00:00.688) 0:06:25.824 ********** 2026-04-06 02:59:34.125449 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:59:34.125457 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:59:34.125466 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:59:34.125474 | orchestrator | 2026-04-06 02:59:34.125483 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-06 02:59:34.125491 | orchestrator | Monday 06 April 2026 02:59:23 +0000 (0:00:00.381) 0:06:26.205 ********** 2026-04-06 02:59:34.125499 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:59:34.125508 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:59:34.125516 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:59:34.125524 | orchestrator | 2026-04-06 02:59:34.125534 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-06 02:59:34.125564 | orchestrator | Monday 06 April 2026 02:59:24 +0000 (0:00:00.363) 0:06:26.569 ********** 2026-04-06 02:59:34.125574 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:59:34.125582 | 
orchestrator | skipping: [testbed-node-4] 2026-04-06 02:59:34.125589 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:59:34.125596 | orchestrator | 2026-04-06 02:59:34.125604 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-06 02:59:34.125612 | orchestrator | Monday 06 April 2026 02:59:24 +0000 (0:00:00.377) 0:06:26.946 ********** 2026-04-06 02:59:34.125643 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:59:34.125655 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:59:34.125667 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:59:34.125678 | orchestrator | 2026-04-06 02:59:34.125689 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-06 02:59:34.125700 | orchestrator | Monday 06 April 2026 02:59:25 +0000 (0:00:00.664) 0:06:27.611 ********** 2026-04-06 02:59:34.125711 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:59:34.125721 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:59:34.125733 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:59:34.125745 | orchestrator | 2026-04-06 02:59:34.125757 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-06 02:59:34.125766 | orchestrator | Monday 06 April 2026 02:59:25 +0000 (0:00:00.361) 0:06:27.972 ********** 2026-04-06 02:59:34.125773 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:59:34.125781 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:59:34.125788 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:59:34.125797 | orchestrator | 2026-04-06 02:59:34.125809 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-06 02:59:34.125824 | orchestrator | Monday 06 April 2026 02:59:25 +0000 (0:00:00.358) 0:06:28.331 ********** 2026-04-06 02:59:34.125841 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:59:34.125853 | orchestrator | ok: 
[testbed-node-4] 2026-04-06 02:59:34.125864 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:59:34.125876 | orchestrator | 2026-04-06 02:59:34.125887 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-04-06 02:59:34.125898 | orchestrator | Monday 06 April 2026 02:59:26 +0000 (0:00:00.872) 0:06:29.203 ********** 2026-04-06 02:59:34.125909 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:59:34.125920 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:59:34.125932 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:59:34.125943 | orchestrator | 2026-04-06 02:59:34.125954 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-06 02:59:34.125965 | orchestrator | Monday 06 April 2026 02:59:27 +0000 (0:00:00.377) 0:06:29.581 ********** 2026-04-06 02:59:34.125976 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 02:59:34.125988 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 02:59:34.125999 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 02:59:34.126010 | orchestrator | 2026-04-06 02:59:34.126095 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-04-06 02:59:34.126124 | orchestrator | Monday 06 April 2026 02:59:27 +0000 (0:00:00.725) 0:06:30.307 ********** 2026-04-06 02:59:34.126136 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 02:59:34.126149 | orchestrator | 2026-04-06 02:59:34.126161 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-06 02:59:34.126174 | orchestrator | Monday 06 April 2026 02:59:28 +0000 (0:00:00.857) 0:06:31.164 ********** 2026-04-06 02:59:34.126186 | orchestrator | skipping: 
[testbed-node-3] 2026-04-06 02:59:34.126198 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:59:34.126209 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:59:34.126222 | orchestrator | 2026-04-06 02:59:34.126234 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-06 02:59:34.126246 | orchestrator | Monday 06 April 2026 02:59:29 +0000 (0:00:00.345) 0:06:31.510 ********** 2026-04-06 02:59:34.126255 | orchestrator | skipping: [testbed-node-3] 2026-04-06 02:59:34.126262 | orchestrator | skipping: [testbed-node-4] 2026-04-06 02:59:34.126270 | orchestrator | skipping: [testbed-node-5] 2026-04-06 02:59:34.126283 | orchestrator | 2026-04-06 02:59:34.126294 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-06 02:59:34.126306 | orchestrator | Monday 06 April 2026 02:59:29 +0000 (0:00:00.383) 0:06:31.894 ********** 2026-04-06 02:59:34.126318 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:59:34.126329 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:59:34.126340 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:59:34.126352 | orchestrator | 2026-04-06 02:59:34.126363 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-06 02:59:34.126374 | orchestrator | Monday 06 April 2026 02:59:30 +0000 (0:00:00.711) 0:06:32.606 ********** 2026-04-06 02:59:34.126386 | orchestrator | ok: [testbed-node-3] 2026-04-06 02:59:34.126398 | orchestrator | ok: [testbed-node-4] 2026-04-06 02:59:34.126409 | orchestrator | ok: [testbed-node-5] 2026-04-06 02:59:34.126420 | orchestrator | 2026-04-06 02:59:34.126431 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-06 02:59:34.126453 | orchestrator | Monday 06 April 2026 02:59:30 +0000 (0:00:00.664) 0:06:33.270 ********** 2026-04-06 02:59:34.126464 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-06 02:59:34.126477 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-06 02:59:34.126489 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-06 02:59:34.126500 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-06 02:59:34.126512 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-06 02:59:34.126523 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-06 02:59:34.126535 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-06 02:59:34.126612 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-06 02:59:34.126627 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-06 02:59:34.126639 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-06 02:59:34.126668 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-06 03:00:42.828429 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-06 03:00:42.828523 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-06 03:00:42.828533 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-06 03:00:42.828540 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-06 03:00:42.828567 | orchestrator | 2026-04-06 03:00:42.828574 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-04-06 03:00:42.828580 | orchestrator | Monday 06 April 2026 02:59:34 +0000 (0:00:03.235) 0:06:36.506 ********** 2026-04-06 03:00:42.828629 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:00:42.828637 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:00:42.828643 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:00:42.828649 | orchestrator | 2026-04-06 03:00:42.828656 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-06 03:00:42.828662 | orchestrator | Monday 06 April 2026 02:59:34 +0000 (0:00:00.363) 0:06:36.870 ********** 2026-04-06 03:00:42.828668 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 03:00:42.828675 | orchestrator | 2026-04-06 03:00:42.828681 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-06 03:00:42.828687 | orchestrator | Monday 06 April 2026 02:59:35 +0000 (0:00:00.891) 0:06:37.761 ********** 2026-04-06 03:00:42.828693 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-06 03:00:42.828699 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-06 03:00:42.828705 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-06 03:00:42.828712 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-04-06 03:00:42.828718 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-04-06 03:00:42.828724 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-04-06 03:00:42.828730 | orchestrator | 2026-04-06 03:00:42.828736 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-06 03:00:42.828742 | orchestrator | Monday 06 April 2026 02:59:36 +0000 (0:00:01.065) 0:06:38.826 ********** 2026-04-06 03:00:42.828747 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:00:42.828753 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-06 03:00:42.828759 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-06 03:00:42.828765 | orchestrator | 2026-04-06 03:00:42.828771 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-06 03:00:42.828777 | orchestrator | Monday 06 April 2026 02:59:38 +0000 (0:00:02.059) 0:06:40.886 ********** 2026-04-06 03:00:42.828783 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-06 03:00:42.828789 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-06 03:00:42.828795 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:00:42.828801 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-06 03:00:42.828807 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-06 03:00:42.828813 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:00:42.828819 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-06 03:00:42.828825 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-06 03:00:42.828831 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:00:42.828837 | orchestrator | 2026-04-06 03:00:42.828843 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-06 03:00:42.828849 | orchestrator | Monday 06 April 2026 02:59:39 +0000 (0:00:01.131) 0:06:42.017 ********** 2026-04-06 03:00:42.828855 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-06 03:00:42.828861 | orchestrator | 2026-04-06 03:00:42.828867 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-06 03:00:42.828873 | orchestrator | Monday 06 April 2026 02:59:41 +0000 (0:00:02.052) 0:06:44.070 ********** 2026-04-06 03:00:42.828879 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 03:00:42.828885 | orchestrator | 2026-04-06 03:00:42.828891 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-04-06 03:00:42.828909 | orchestrator | Monday 06 April 2026 02:59:42 +0000 (0:00:00.940) 0:06:45.010 ********** 2026-04-06 03:00:42.828925 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'}) 2026-04-06 03:00:42.828935 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'}) 2026-04-06 03:00:42.828948 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'}) 2026-04-06 03:00:42.828964 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'}) 2026-04-06 03:00:42.828974 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'}) 2026-04-06 03:00:42.828984 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'}) 2026-04-06 03:00:42.828994 | orchestrator | 2026-04-06 03:00:42.829020 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-06 03:00:42.829032 | orchestrator | Monday 06 April 2026 03:00:23 +0000 (0:00:41.288) 0:07:26.299 ********** 2026-04-06 03:00:42.829041 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:00:42.829052 | orchestrator | skipping: [testbed-node-4] 2026-04-06 
03:00:42.829063 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:00:42.829074 | orchestrator | 2026-04-06 03:00:42.829085 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-06 03:00:42.829096 | orchestrator | Monday 06 April 2026 03:00:24 +0000 (0:00:00.359) 0:07:26.658 ********** 2026-04-06 03:00:42.829107 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 03:00:42.829117 | orchestrator | 2026-04-06 03:00:42.829128 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-06 03:00:42.829139 | orchestrator | Monday 06 April 2026 03:00:25 +0000 (0:00:00.947) 0:07:27.606 ********** 2026-04-06 03:00:42.829150 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:00:42.829161 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:00:42.829171 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:00:42.829185 | orchestrator | 2026-04-06 03:00:42.829195 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-04-06 03:00:42.829205 | orchestrator | Monday 06 April 2026 03:00:25 +0000 (0:00:00.711) 0:07:28.317 ********** 2026-04-06 03:00:42.829214 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:00:42.829224 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:00:42.829233 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:00:42.829242 | orchestrator | 2026-04-06 03:00:42.829251 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-06 03:00:42.829260 | orchestrator | Monday 06 April 2026 03:00:28 +0000 (0:00:02.674) 0:07:30.991 ********** 2026-04-06 03:00:42.829270 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 03:00:42.829279 | orchestrator | 2026-04-06 03:00:42.829289 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-04-06 03:00:42.829298 | orchestrator | Monday 06 April 2026 03:00:29 +0000 (0:00:00.876) 0:07:31.867 ********** 2026-04-06 03:00:42.829307 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:00:42.829316 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:00:42.829326 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:00:42.829335 | orchestrator | 2026-04-06 03:00:42.829344 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-06 03:00:42.829353 | orchestrator | Monday 06 April 2026 03:00:30 +0000 (0:00:01.192) 0:07:33.060 ********** 2026-04-06 03:00:42.829363 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:00:42.829383 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:00:42.829394 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:00:42.829404 | orchestrator | 2026-04-06 03:00:42.829413 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-06 03:00:42.829422 | orchestrator | Monday 06 April 2026 03:00:31 +0000 (0:00:01.173) 0:07:34.234 ********** 2026-04-06 03:00:42.829433 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:00:42.829439 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:00:42.829445 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:00:42.829451 | orchestrator | 2026-04-06 03:00:42.829456 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-06 03:00:42.829462 | orchestrator | Monday 06 April 2026 03:00:34 +0000 (0:00:02.193) 0:07:36.428 ********** 2026-04-06 03:00:42.829468 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:00:42.829474 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:00:42.829479 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:00:42.829485 | orchestrator | 2026-04-06 03:00:42.829491 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-04-06 03:00:42.829497 | orchestrator | Monday 06 April 2026 03:00:34 +0000 (0:00:00.409) 0:07:36.838 ********** 2026-04-06 03:00:42.829502 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:00:42.829508 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:00:42.829514 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:00:42.829520 | orchestrator | 2026-04-06 03:00:42.829527 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-06 03:00:42.829536 | orchestrator | Monday 06 April 2026 03:00:34 +0000 (0:00:00.402) 0:07:37.240 ********** 2026-04-06 03:00:42.829546 | orchestrator | ok: [testbed-node-3] => (item=1) 2026-04-06 03:00:42.829555 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-06 03:00:42.829565 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-04-06 03:00:42.829581 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-04-06 03:00:42.829608 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-04-06 03:00:42.829618 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-04-06 03:00:42.829628 | orchestrator | 2026-04-06 03:00:42.829638 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-06 03:00:42.829648 | orchestrator | Monday 06 April 2026 03:00:35 +0000 (0:00:01.063) 0:07:38.303 ********** 2026-04-06 03:00:42.829658 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-04-06 03:00:42.829667 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-04-06 03:00:42.829678 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-06 03:00:42.829684 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-04-06 03:00:42.829690 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-04-06 03:00:42.829695 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-04-06 03:00:42.829701 | orchestrator | 2026-04-06 03:00:42.829707 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-04-06 03:00:42.829713 | orchestrator | Monday 06 April 2026 03:00:38 +0000 (0:00:02.531) 0:07:40.835 ********** 2026-04-06 03:00:42.829719 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-04-06 03:00:42.829724 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-06 03:00:42.829730 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-04-06 03:00:42.829736 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-04-06 03:00:42.829742 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-04-06 03:00:42.829748 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-04-06 03:00:42.829753 | orchestrator | 2026-04-06 03:00:42.829768 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-06 03:01:16.265098 | orchestrator | Monday 06 April 2026 03:00:42 +0000 (0:00:04.379) 0:07:45.214 ********** 2026-04-06 03:01:16.265183 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:01:16.265192 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:01:16.265198 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-06 03:01:16.265221 | orchestrator | 2026-04-06 03:01:16.265227 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-06 03:01:16.265232 | orchestrator | Monday 06 April 2026 03:00:45 +0000 (0:00:02.764) 0:07:47.979 ********** 2026-04-06 03:01:16.265237 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:01:16.265241 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:01:16.265246 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-04-06 03:01:16.265252 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-06 03:01:16.265257 | orchestrator | 2026-04-06 03:01:16.265262 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-06 03:01:16.265267 | orchestrator | Monday 06 April 2026 03:00:58 +0000 (0:00:12.891) 0:08:00.871 ********** 2026-04-06 03:01:16.265271 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:01:16.265276 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:01:16.265280 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:01:16.265285 | orchestrator | 2026-04-06 03:01:16.265290 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-06 03:01:16.265295 | orchestrator | Monday 06 April 2026 03:00:59 +0000 (0:00:01.307) 0:08:02.178 ********** 2026-04-06 03:01:16.265300 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:01:16.265304 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:01:16.265309 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:01:16.265313 | orchestrator | 2026-04-06 03:01:16.265318 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-06 03:01:16.265322 | orchestrator | Monday 06 April 2026 03:01:00 +0000 (0:00:00.364) 0:08:02.542 ********** 2026-04-06 03:01:16.265328 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 03:01:16.265333 | orchestrator | 2026-04-06 03:01:16.265339 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-06 03:01:16.265347 | orchestrator | Monday 06 April 2026 03:01:01 +0000 (0:00:00.956) 0:08:03.499 ********** 2026-04-06 03:01:16.265355 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 03:01:16.265363 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)
2026-04-06 03:01:16.265371 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-06 03:01:16.265379 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:01:16.265386 | orchestrator |
2026-04-06 03:01:16.265395 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-06 03:01:16.265402 | orchestrator | Monday 06 April 2026 03:01:01 +0000 (0:00:00.448) 0:08:03.948 **********
2026-04-06 03:01:16.265410 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:01:16.265419 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:01:16.265427 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:01:16.265436 | orchestrator |
2026-04-06 03:01:16.265443 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-06 03:01:16.265451 | orchestrator | Monday 06 April 2026 03:01:01 +0000 (0:00:00.345) 0:08:04.294 **********
2026-04-06 03:01:16.265459 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:01:16.265467 | orchestrator |
2026-04-06 03:01:16.265476 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-06 03:01:16.265484 | orchestrator | Monday 06 April 2026 03:01:02 +0000 (0:00:00.235) 0:08:04.529 **********
2026-04-06 03:01:16.265492 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:01:16.265498 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:01:16.265503 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:01:16.265507 | orchestrator |
2026-04-06 03:01:16.265512 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-06 03:01:16.265516 | orchestrator | Monday 06 April 2026 03:01:02 +0000 (0:00:00.631) 0:08:05.161 **********
2026-04-06 03:01:16.265521 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:01:16.265525 | orchestrator |
2026-04-06 03:01:16.265530 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-06 03:01:16.265551 | orchestrator | Monday 06 April 2026 03:01:03 +0000 (0:00:00.270) 0:08:05.431 **********
2026-04-06 03:01:16.265556 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:01:16.265561 | orchestrator |
2026-04-06 03:01:16.265566 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-06 03:01:16.265570 | orchestrator | Monday 06 April 2026 03:01:03 +0000 (0:00:00.254) 0:08:05.685 **********
2026-04-06 03:01:16.265575 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:01:16.265579 | orchestrator |
2026-04-06 03:01:16.265584 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-06 03:01:16.265589 | orchestrator | Monday 06 April 2026 03:01:03 +0000 (0:00:00.137) 0:08:05.823 **********
2026-04-06 03:01:16.265593 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:01:16.265598 | orchestrator |
2026-04-06 03:01:16.265653 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-06 03:01:16.265659 | orchestrator | Monday 06 April 2026 03:01:03 +0000 (0:00:00.245) 0:08:06.068 **********
2026-04-06 03:01:16.265664 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:01:16.265670 | orchestrator |
2026-04-06 03:01:16.265675 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-06 03:01:16.265680 | orchestrator | Monday 06 April 2026 03:01:03 +0000 (0:00:00.251) 0:08:06.320 **********
2026-04-06 03:01:16.265686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-06 03:01:16.265691 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-06 03:01:16.265697 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-06 03:01:16.265702 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:01:16.265708 | orchestrator |
2026-04-06 03:01:16.265726 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-06 03:01:16.265731 | orchestrator | Monday 06 April 2026 03:01:04 +0000 (0:00:00.476) 0:08:06.796 **********
2026-04-06 03:01:16.265737 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:01:16.265742 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:01:16.265747 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:01:16.265753 | orchestrator |
2026-04-06 03:01:16.265758 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-06 03:01:16.265763 | orchestrator | Monday 06 April 2026 03:01:04 +0000 (0:00:00.351) 0:08:07.148 **********
2026-04-06 03:01:16.265768 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:01:16.265774 | orchestrator |
2026-04-06 03:01:16.265779 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-06 03:01:16.265784 | orchestrator | Monday 06 April 2026 03:01:05 +0000 (0:00:00.259) 0:08:07.407 **********
2026-04-06 03:01:16.265789 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:01:16.265795 | orchestrator |
2026-04-06 03:01:16.265800 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-04-06 03:01:16.265805 | orchestrator |
2026-04-06 03:01:16.265810 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-06 03:01:16.265816 | orchestrator | Monday 06 April 2026 03:01:06 +0000 (0:00:01.376) 0:08:08.783 **********
2026-04-06 03:01:16.265822 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 03:01:16.265829 | orchestrator |
2026-04-06 03:01:16.265834 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-06 03:01:16.265840 | orchestrator | Monday 06 April 2026 03:01:07 +0000 (0:00:01.370) 0:08:10.154 **********
2026-04-06 03:01:16.265846 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 03:01:16.265851 | orchestrator |
2026-04-06 03:01:16.265856 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-06 03:01:16.265866 | orchestrator | Monday 06 April 2026 03:01:09 +0000 (0:00:01.509) 0:08:11.663 **********
2026-04-06 03:01:16.265872 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:01:16.265877 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:01:16.265882 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:01:16.265888 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:01:16.265894 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:01:16.265899 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:01:16.265904 | orchestrator |
2026-04-06 03:01:16.265909 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-06 03:01:16.265915 | orchestrator | Monday 06 April 2026 03:01:10 +0000 (0:00:01.344) 0:08:13.008 **********
2026-04-06 03:01:16.265920 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:01:16.265926 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:01:16.265931 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:01:16.265936 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:01:16.265942 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:01:16.265947 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:01:16.265952 | orchestrator |
2026-04-06 03:01:16.265958 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-06 03:01:16.265963 | orchestrator | Monday 06 April 2026 03:01:11 +0000 (0:00:00.743) 0:08:13.751 **********
2026-04-06 03:01:16.265968 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:01:16.265974 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:01:16.265979 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:01:16.265985 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:01:16.265990 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:01:16.265995 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:01:16.266000 | orchestrator |
2026-04-06 03:01:16.266008 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-06 03:01:16.266062 | orchestrator | Monday 06 April 2026 03:01:12 +0000 (0:00:01.006) 0:08:14.757 **********
2026-04-06 03:01:16.266073 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:01:16.266081 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:01:16.266087 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:01:16.266095 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:01:16.266102 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:01:16.266110 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:01:16.266118 | orchestrator |
2026-04-06 03:01:16.266126 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-06 03:01:16.266140 | orchestrator | Monday 06 April 2026 03:01:13 +0000 (0:00:00.796) 0:08:15.554 **********
2026-04-06 03:01:16.266149 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:01:16.266154 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:01:16.266160 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:01:16.266165 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:01:16.266170 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:01:16.266176 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:01:16.266181 | orchestrator |
2026-04-06 03:01:16.266187 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-06 03:01:16.266192 | orchestrator | Monday 06 April 2026 03:01:14 +0000 (0:00:01.406) 0:08:16.960 **********
2026-04-06 03:01:16.266197 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:01:16.266202 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:01:16.266208 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:01:16.266213 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:01:16.266219 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:01:16.266224 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:01:16.266229 | orchestrator |
2026-04-06 03:01:16.266234 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-06 03:01:16.266240 | orchestrator | Monday 06 April 2026 03:01:15 +0000 (0:00:00.681) 0:08:17.642 **********
2026-04-06 03:01:16.266245 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:01:16.266251 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:01:16.266256 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:01:16.266267 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:01:16.266272 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:01:16.266277 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:01:16.266283 | orchestrator |
2026-04-06 03:01:16.266294 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-06 03:01:50.328936 | orchestrator | Monday 06 April 2026 03:01:16 +0000 (0:00:01.003) 0:08:18.645 **********
2026-04-06 03:01:50.329048 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:01:50.329059 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:01:50.329066 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:01:50.329072 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:01:50.329078 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:01:50.329084 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:01:50.329090 | orchestrator |
2026-04-06 03:01:50.329097 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-06 03:01:50.329103 | orchestrator | Monday 06 April 2026 03:01:17 +0000 (0:00:01.162) 0:08:19.808 **********
2026-04-06 03:01:50.329109 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:01:50.329115 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:01:50.329121 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:01:50.329127 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:01:50.329133 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:01:50.329139 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:01:50.329145 | orchestrator |
2026-04-06 03:01:50.329151 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-06 03:01:50.329157 | orchestrator | Monday 06 April 2026 03:01:18 +0000 (0:00:01.487) 0:08:21.295 **********
2026-04-06 03:01:50.329163 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:01:50.329171 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:01:50.329177 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:01:50.329183 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:01:50.329189 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:01:50.329195 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:01:50.329200 | orchestrator |
2026-04-06 03:01:50.329206 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-06 03:01:50.329212 | orchestrator | Monday 06 April 2026 03:01:19 +0000 (0:00:00.752) 0:08:22.048 **********
2026-04-06 03:01:50.329218 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:01:50.329224 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:01:50.329230 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:01:50.329236 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:01:50.329242 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:01:50.329248 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:01:50.329254 | orchestrator |
2026-04-06 03:01:50.329260 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-06 03:01:50.329266 | orchestrator | Monday 06 April 2026 03:01:20 +0000 (0:00:00.965) 0:08:23.014 **********
2026-04-06 03:01:50.329272 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:01:50.329278 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:01:50.329284 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:01:50.329289 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:01:50.329295 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:01:50.329301 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:01:50.329307 | orchestrator |
2026-04-06 03:01:50.329313 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-06 03:01:50.329319 | orchestrator | Monday 06 April 2026 03:01:21 +0000 (0:00:00.747) 0:08:23.762 **********
2026-04-06 03:01:50.329325 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:01:50.329331 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:01:50.329337 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:01:50.329343 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:01:50.329349 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:01:50.329354 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:01:50.329360 | orchestrator |
2026-04-06 03:01:50.329366 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-06 03:01:50.329391 | orchestrator | Monday 06 April 2026 03:01:22 +0000 (0:00:00.962) 0:08:24.725 **********
2026-04-06 03:01:50.329397 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:01:50.329403 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:01:50.329409 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:01:50.329415 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:01:50.329421 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:01:50.329427 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:01:50.329432 | orchestrator |
2026-04-06 03:01:50.329438 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-06 03:01:50.329444 | orchestrator | Monday 06 April 2026 03:01:22 +0000 (0:00:00.667) 0:08:25.392 **********
2026-04-06 03:01:50.329450 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:01:50.329456 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:01:50.329462 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:01:50.329469 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:01:50.329476 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:01:50.329483 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:01:50.329490 | orchestrator |
2026-04-06 03:01:50.329497 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-06 03:01:50.329504 | orchestrator | Monday 06 April 2026 03:01:23 +0000 (0:00:00.943) 0:08:26.335 **********
2026-04-06 03:01:50.329511 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:01:50.329522 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:01:50.329533 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:01:50.329544 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:01:50.329557 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:01:50.329572 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:01:50.329582 | orchestrator |
2026-04-06 03:01:50.329590 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-06 03:01:50.329600 | orchestrator | Monday 06 April 2026 03:01:24 +0000 (0:00:00.704) 0:08:27.040 **********
2026-04-06 03:01:50.329609 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:01:50.329618 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:01:50.329646 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:01:50.329655 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:01:50.329664 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:01:50.329674 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:01:50.329684 | orchestrator |
2026-04-06 03:01:50.329694 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-06 03:01:50.329704 | orchestrator | Monday 06 April 2026 03:01:25 +0000 (0:00:01.067) 0:08:28.107 **********
2026-04-06 03:01:50.329714 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:01:50.329723 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:01:50.329732 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:01:50.329741 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:01:50.329751 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:01:50.329760 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:01:50.329770 | orchestrator |
2026-04-06 03:01:50.329798 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-06 03:01:50.329809 | orchestrator | Monday 06 April 2026 03:01:26 +0000 (0:00:00.736) 0:08:28.844 **********
2026-04-06 03:01:50.329820 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:01:50.329830 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:01:50.329925 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:01:50.329941 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:01:50.329947 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:01:50.329953 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:01:50.329959 | orchestrator |
2026-04-06 03:01:50.329966 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-04-06 03:01:50.329972 | orchestrator | Monday 06 April 2026 03:01:28 +0000 (0:00:01.617) 0:08:30.461 **********
2026-04-06 03:01:50.329978 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-06 03:01:50.329992 | orchestrator |
2026-04-06 03:01:50.329998 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-04-06 03:01:50.330004 | orchestrator | Monday 06 April 2026 03:01:31 +0000 (0:00:03.917) 0:08:34.379 **********
2026-04-06 03:01:50.330010 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-06 03:01:50.330075 | orchestrator |
2026-04-06 03:01:50.330086 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-04-06 03:01:50.330096 | orchestrator | Monday 06 April 2026 03:01:34 +0000 (0:00:02.699) 0:08:37.078 **********
2026-04-06 03:01:50.330111 | orchestrator | changed: [testbed-node-3]
2026-04-06 03:01:50.330123 | orchestrator | changed: [testbed-node-4]
2026-04-06 03:01:50.330133 | orchestrator | changed: [testbed-node-5]
2026-04-06 03:01:50.330143 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:01:50.330153 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:01:50.330163 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:01:50.330173 | orchestrator |
2026-04-06 03:01:50.330183 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-04-06 03:01:50.330194 | orchestrator | Monday 06 April 2026 03:01:36 +0000 (0:00:01.532) 0:08:38.611 **********
2026-04-06 03:01:50.330206 | orchestrator | changed: [testbed-node-3]
2026-04-06 03:01:50.330216 | orchestrator | changed: [testbed-node-4]
2026-04-06 03:01:50.330227 | orchestrator | changed: [testbed-node-5]
2026-04-06 03:01:50.330235 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:01:50.330241 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:01:50.330247 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:01:50.330253 | orchestrator |
2026-04-06 03:01:50.330259 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-04-06 03:01:50.330265 | orchestrator | Monday 06 April 2026 03:01:37 +0000 (0:00:01.308) 0:08:39.920 **********
2026-04-06 03:01:50.330273 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 03:01:50.330281 | orchestrator |
2026-04-06 03:01:50.330287 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-04-06 03:01:50.330293 | orchestrator | Monday 06 April 2026 03:01:38 +0000 (0:00:01.408) 0:08:41.328 **********
2026-04-06 03:01:50.330298 | orchestrator | changed: [testbed-node-3]
2026-04-06 03:01:50.330304 | orchestrator | changed: [testbed-node-4]
2026-04-06 03:01:50.330310 | orchestrator | changed: [testbed-node-5]
2026-04-06 03:01:50.330316 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:01:50.330322 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:01:50.330327 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:01:50.330333 | orchestrator |
2026-04-06 03:01:50.330339 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-04-06 03:01:50.330345 | orchestrator | Monday 06 April 2026 03:01:40 +0000 (0:00:01.668) 0:08:42.997 **********
2026-04-06 03:01:50.330351 | orchestrator | changed: [testbed-node-3]
2026-04-06 03:01:50.330356 | orchestrator | changed: [testbed-node-4]
2026-04-06 03:01:50.330362 | orchestrator | changed: [testbed-node-5]
2026-04-06 03:01:50.330368 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:01:50.330374 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:01:50.330379 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:01:50.330385 | orchestrator |
2026-04-06 03:01:50.330391 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-04-06 03:01:50.330397 | orchestrator | Monday 06 April 2026 03:01:44 +0000 (0:00:03.850) 0:08:46.848 **********
2026-04-06 03:01:50.330403 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 03:01:50.330409 | orchestrator |
2026-04-06 03:01:50.330421 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-04-06 03:01:50.330428 | orchestrator | Monday 06 April 2026 03:01:45 +0000 (0:00:01.439) 0:08:48.287 **********
2026-04-06 03:01:50.330433 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:01:50.330446 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:01:50.330452 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:01:50.330457 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:01:50.330463 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:01:50.330469 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:01:50.330475 | orchestrator |
2026-04-06 03:01:50.330481 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-04-06 03:01:50.330487 | orchestrator | Monday 06 April 2026 03:01:46 +0000 (0:00:00.745) 0:08:49.032 **********
2026-04-06 03:01:50.330492 | orchestrator | changed: [testbed-node-3]
2026-04-06 03:01:50.330498 | orchestrator | changed: [testbed-node-4]
2026-04-06 03:01:50.330504 | orchestrator | changed: [testbed-node-5]
2026-04-06 03:01:50.330510 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:01:50.330516 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:01:50.330521 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:01:50.330527 | orchestrator |
2026-04-06 03:01:50.330533 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-04-06 03:01:50.330539 | orchestrator | Monday 06 April 2026 03:01:49 +0000 (0:00:02.707) 0:08:51.739 **********
2026-04-06 03:01:50.330544 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:01:50.330550 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:01:50.330556 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:01:50.330562 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:01:50.330567 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:01:50.330573 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:01:50.330579 | orchestrator |
2026-04-06 03:01:50.330595 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-04-06 03:02:19.473298 | orchestrator |
2026-04-06 03:02:19.473468 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-06 03:02:19.474211 | orchestrator | Monday 06 April 2026 03:01:50 +0000 (0:00:00.979) 0:08:52.719 **********
2026-04-06 03:02:19.474236 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 03:02:19.474250 | orchestrator |
2026-04-06 03:02:19.474261 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-06 03:02:19.474271 | orchestrator | Monday 06 April 2026 03:01:51 +0000 (0:00:00.929) 0:08:53.649 **********
2026-04-06 03:02:19.474280 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 03:02:19.474288 | orchestrator |
2026-04-06 03:02:19.474296 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-06 03:02:19.474305 | orchestrator | Monday 06 April 2026 03:01:52 +0000 (0:00:00.893) 0:08:54.542 **********
2026-04-06 03:02:19.474313 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:02:19.474323 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:02:19.474331 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:02:19.474339 | orchestrator |
2026-04-06 03:02:19.474347 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-06 03:02:19.474356 | orchestrator | Monday 06 April 2026 03:01:52 +0000 (0:00:00.386) 0:08:54.929 **********
2026-04-06 03:02:19.474364 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:02:19.474373 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:02:19.474381 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:02:19.474389 | orchestrator |
2026-04-06 03:02:19.474398 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-06 03:02:19.474406 | orchestrator | Monday 06 April 2026 03:01:53 +0000 (0:00:00.721) 0:08:55.650 **********
2026-04-06 03:02:19.474414 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:02:19.474422 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:02:19.474430 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:02:19.474438 | orchestrator |
2026-04-06 03:02:19.474446 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-06 03:02:19.474455 | orchestrator | Monday 06 April 2026 03:01:54 +0000 (0:00:00.786) 0:08:56.437 **********
2026-04-06 03:02:19.474489 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:02:19.474497 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:02:19.474505 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:02:19.474513 | orchestrator |
2026-04-06 03:02:19.474521 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-06 03:02:19.474529 | orchestrator | Monday 06 April 2026 03:01:55 +0000 (0:00:01.085) 0:08:57.522 **********
2026-04-06 03:02:19.474537 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:02:19.474545 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:02:19.474553 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:02:19.474562 | orchestrator |
2026-04-06 03:02:19.474569 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-06 03:02:19.474578 | orchestrator | Monday 06 April 2026 03:01:55 +0000 (0:00:00.366) 0:08:57.888 **********
2026-04-06 03:02:19.474586 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:02:19.474594 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:02:19.474602 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:02:19.474610 | orchestrator |
2026-04-06 03:02:19.474618 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-06 03:02:19.474626 | orchestrator | Monday 06 April 2026 03:01:55 +0000 (0:00:00.364) 0:08:58.253 **********
2026-04-06 03:02:19.474634 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:02:19.474690 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:02:19.474699 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:02:19.474707 | orchestrator |
2026-04-06 03:02:19.474715 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-06 03:02:19.474723 | orchestrator | Monday 06 April 2026 03:01:56 +0000 (0:00:00.338) 0:08:58.592 **********
2026-04-06 03:02:19.474731 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:02:19.474739 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:02:19.474747 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:02:19.474755 | orchestrator |
2026-04-06 03:02:19.474763 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-06 03:02:19.474771 | orchestrator | Monday 06 April 2026 03:01:57 +0000 (0:00:01.028) 0:08:59.620 **********
2026-04-06 03:02:19.474779 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:02:19.474800 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:02:19.474809 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:02:19.474817 | orchestrator |
2026-04-06 03:02:19.474825 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-06 03:02:19.474833 | orchestrator | Monday 06 April 2026 03:01:57 +0000 (0:00:00.772) 0:09:00.392 **********
2026-04-06 03:02:19.474841 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:02:19.474849 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:02:19.474857 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:02:19.474865 | orchestrator |
2026-04-06 03:02:19.474873 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-06 03:02:19.474881 | orchestrator | Monday 06 April 2026 03:01:58 +0000 (0:00:00.386) 0:09:00.779 **********
2026-04-06 03:02:19.474889 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:02:19.474897 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:02:19.474905 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:02:19.474913 | orchestrator |
2026-04-06 03:02:19.474921 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-06 03:02:19.474929 | orchestrator | Monday 06 April 2026 03:01:58 +0000 (0:00:00.367) 0:09:01.146 **********
2026-04-06 03:02:19.474937 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:02:19.474945 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:02:19.474953 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:02:19.474961 | orchestrator |
2026-04-06 03:02:19.474969 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-06 03:02:19.474977 | orchestrator | Monday 06 April 2026 03:01:59 +0000 (0:00:00.658) 0:09:01.805 **********
2026-04-06 03:02:19.474985 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:02:19.474993 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:02:19.475008 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:02:19.475016 | orchestrator |
2026-04-06 03:02:19.475043 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-06 03:02:19.475052 | orchestrator | Monday 06 April 2026 03:01:59 +0000 (0:00:00.388) 0:09:02.194 **********
2026-04-06 03:02:19.475060 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:02:19.475068 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:02:19.475076 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:02:19.475084 | orchestrator |
2026-04-06 03:02:19.475092 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-06 03:02:19.475100 | orchestrator | Monday 06 April 2026 03:02:00 +0000 (0:00:00.407) 0:09:02.601 **********
2026-04-06 03:02:19.475109 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:02:19.475117 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:02:19.475125 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:02:19.475133 | orchestrator |
2026-04-06 03:02:19.475141 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-06 03:02:19.475149 | orchestrator | Monday 06 April 2026 03:02:00 +0000 (0:00:00.359) 0:09:02.961 **********
2026-04-06 03:02:19.475157 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:02:19.475165 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:02:19.475173 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:02:19.475181 | orchestrator |
2026-04-06 03:02:19.475189 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-06 03:02:19.475197 | orchestrator | Monday 06 April 2026 03:02:01 +0000 (0:00:00.659) 0:09:03.621 **********
2026-04-06 03:02:19.475205 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:02:19.475213 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:02:19.475221 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:02:19.475229 | orchestrator |
2026-04-06 03:02:19.475238 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-06 03:02:19.475246 | orchestrator | Monday 06 April 2026 03:02:01 +0000 (0:00:00.329) 0:09:03.950 **********
2026-04-06 03:02:19.475254 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:02:19.475262 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:02:19.475270 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:02:19.475278 | orchestrator |
2026-04-06 03:02:19.475286 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-06 03:02:19.475294 | orchestrator | Monday 06 April 2026 03:02:01 +0000 (0:00:00.380) 0:09:04.331 **********
2026-04-06 03:02:19.475302 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:02:19.475310 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:02:19.475318 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:02:19.475326 | orchestrator |
2026-04-06 03:02:19.475334 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-04-06 03:02:19.475342 | orchestrator | Monday 06 April 2026 03:02:02 +0000 (0:00:00.938) 0:09:05.270 **********
2026-04-06 03:02:19.475350 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:02:19.475358 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:02:19.475367 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-04-06 03:02:19.475375 | orchestrator |
2026-04-06 03:02:19.475383 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-04-06 03:02:19.475391 | orchestrator | Monday 06 April 2026 03:02:03 +0000 (0:00:00.471) 0:09:05.741 **********
2026-04-06 03:02:19.475400 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-06 03:02:19.475408 | orchestrator |
2026-04-06 03:02:19.475416 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-04-06 03:02:19.475424 | orchestrator | Monday 06 April 2026 03:02:05 +0000 (0:00:02.088) 0:09:07.830 **********
2026-04-06 03:02:19.475434 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-04-06 03:02:19.475450 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:02:19.475458 | orchestrator |
2026-04-06 03:02:19.475466 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-04-06 03:02:19.475475 | orchestrator | Monday 06 April 2026 03:02:05 +0000 (0:00:00.224) 0:09:08.054 **********
2026-04-06 03:02:19.475490 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-06 03:02:19.475505 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-06 03:02:19.475513 | orchestrator |
2026-04-06 03:02:19.475521 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-04-06 03:02:19.475529 | orchestrator | Monday 06 April 2026 03:02:13 +0000 (0:00:08.131) 0:09:16.185 **********
2026-04-06 03:02:19.475537 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-06 03:02:19.475545 | orchestrator |
2026-04-06 03:02:19.475553 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-04-06 03:02:19.475561 | orchestrator | Monday 06 April 2026 03:02:17 +0000 (0:00:03.666) 0:09:19.851 **********
2026-04-06 03:02:19.475569 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 03:02:19.475577 | orchestrator |
2026-04-06 03:02:19.475585 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-04-06 03:02:19.475593 | orchestrator | Monday 06 April 2026 03:02:18 +0000 (0:00:00.891) 0:09:20.743 **********
2026-04-06 03:02:19.475601 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-06 03:02:19.475614 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-06 03:02:48.331456 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-06 03:02:48.331578 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-04-06 03:02:48.331597 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-04-06 03:02:48.331609 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-04-06 03:02:48.331621 | orchestrator |
2026-04-06 03:02:48.331633 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-04-06 03:02:48.331645 | orchestrator | Monday 06 April 2026 03:02:19 +0000 (0:00:01.119) 0:09:21.862 **********
2026-04-06 03:02:48.331706 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-06 03:02:48.331722 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-06 03:02:48.331734 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-06 03:02:48.331747 | orchestrator |
2026-04-06 03:02:48.331760 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-04-06 03:02:48.331772 | orchestrator | Monday 06 April 2026 03:02:21 +0000 (0:00:02.259) 0:09:24.122 **********
2026-04-06 03:02:48.331784 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-06 03:02:48.331797 | orchestrator | skipping: [testbed-node-3]
=> (item=None)  2026-04-06 03:02:48.331809 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:02:48.331821 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-06 03:02:48.331833 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-06 03:02:48.331844 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:02:48.331855 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-06 03:02:48.331866 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-06 03:02:48.331877 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:02:48.331919 | orchestrator | 2026-04-06 03:02:48.331933 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-04-06 03:02:48.331946 | orchestrator | Monday 06 April 2026 03:02:23 +0000 (0:00:01.479) 0:09:25.601 ********** 2026-04-06 03:02:48.331957 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:02:48.331969 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:02:48.331981 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:02:48.331994 | orchestrator | 2026-04-06 03:02:48.332008 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-04-06 03:02:48.332023 | orchestrator | Monday 06 April 2026 03:02:26 +0000 (0:00:03.618) 0:09:29.220 ********** 2026-04-06 03:02:48.332035 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:02:48.332047 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:02:48.332060 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:02:48.332072 | orchestrator | 2026-04-06 03:02:48.332083 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-04-06 03:02:48.332096 | orchestrator | Monday 06 April 2026 03:02:27 +0000 (0:00:00.350) 0:09:29.571 ********** 2026-04-06 03:02:48.332109 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-04-06 03:02:48.332121 | orchestrator | 2026-04-06 03:02:48.332133 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-04-06 03:02:48.332146 | orchestrator | Monday 06 April 2026 03:02:28 +0000 (0:00:00.891) 0:09:30.463 ********** 2026-04-06 03:02:48.332158 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 03:02:48.332173 | orchestrator | 2026-04-06 03:02:48.332185 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-04-06 03:02:48.332196 | orchestrator | Monday 06 April 2026 03:02:28 +0000 (0:00:00.598) 0:09:31.061 ********** 2026-04-06 03:02:48.332208 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:02:48.332221 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:02:48.332233 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:02:48.332245 | orchestrator | 2026-04-06 03:02:48.332257 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-04-06 03:02:48.332268 | orchestrator | Monday 06 April 2026 03:02:29 +0000 (0:00:01.260) 0:09:32.322 ********** 2026-04-06 03:02:48.332280 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:02:48.332292 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:02:48.332320 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:02:48.332334 | orchestrator | 2026-04-06 03:02:48.332345 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-04-06 03:02:48.332357 | orchestrator | Monday 06 April 2026 03:02:31 +0000 (0:00:01.500) 0:09:33.822 ********** 2026-04-06 03:02:48.332368 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:02:48.332380 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:02:48.332392 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:02:48.332403 | orchestrator | 2026-04-06 
03:02:48.332414 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-04-06 03:02:48.332427 | orchestrator | Monday 06 April 2026 03:02:33 +0000 (0:00:01.964) 0:09:35.786 ********** 2026-04-06 03:02:48.332438 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:02:48.332450 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:02:48.332460 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:02:48.332471 | orchestrator | 2026-04-06 03:02:48.332483 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-04-06 03:02:48.332494 | orchestrator | Monday 06 April 2026 03:02:35 +0000 (0:00:02.035) 0:09:37.822 ********** 2026-04-06 03:02:48.332506 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:02:48.332519 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:02:48.332530 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:02:48.332541 | orchestrator | 2026-04-06 03:02:48.332552 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-06 03:02:48.332563 | orchestrator | Monday 06 April 2026 03:02:37 +0000 (0:00:01.675) 0:09:39.498 ********** 2026-04-06 03:02:48.332586 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:02:48.332597 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:02:48.332609 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:02:48.332620 | orchestrator | 2026-04-06 03:02:48.332679 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-06 03:02:48.332693 | orchestrator | Monday 06 April 2026 03:02:37 +0000 (0:00:00.746) 0:09:40.245 ********** 2026-04-06 03:02:48.332705 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 03:02:48.332715 | orchestrator | 2026-04-06 03:02:48.332726 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-04-06 03:02:48.332738 | orchestrator | Monday 06 April 2026 03:02:38 +0000 (0:00:00.919) 0:09:41.164 ********** 2026-04-06 03:02:48.332749 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:02:48.332760 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:02:48.332773 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:02:48.332784 | orchestrator | 2026-04-06 03:02:48.332794 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-06 03:02:48.332807 | orchestrator | Monday 06 April 2026 03:02:39 +0000 (0:00:00.381) 0:09:41.546 ********** 2026-04-06 03:02:48.332818 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:02:48.332830 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:02:48.332840 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:02:48.332850 | orchestrator | 2026-04-06 03:02:48.332861 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-06 03:02:48.332872 | orchestrator | Monday 06 April 2026 03:02:40 +0000 (0:00:01.260) 0:09:42.806 ********** 2026-04-06 03:02:48.332883 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 03:02:48.332895 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 03:02:48.332906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 03:02:48.332917 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:02:48.332929 | orchestrator | 2026-04-06 03:02:48.332940 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-06 03:02:48.332952 | orchestrator | Monday 06 April 2026 03:02:41 +0000 (0:00:01.034) 0:09:43.841 ********** 2026-04-06 03:02:48.332963 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:02:48.332974 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:02:48.332986 | orchestrator | ok: [testbed-node-5] 2026-04-06 
03:02:48.332997 | orchestrator | 2026-04-06 03:02:48.333008 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-06 03:02:48.333019 | orchestrator | 2026-04-06 03:02:48.333030 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-06 03:02:48.333041 | orchestrator | Monday 06 April 2026 03:02:42 +0000 (0:00:00.935) 0:09:44.776 ********** 2026-04-06 03:02:48.333052 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 03:02:48.333064 | orchestrator | 2026-04-06 03:02:48.333074 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-06 03:02:48.333085 | orchestrator | Monday 06 April 2026 03:02:42 +0000 (0:00:00.610) 0:09:45.387 ********** 2026-04-06 03:02:48.333095 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 03:02:48.333106 | orchestrator | 2026-04-06 03:02:48.333116 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-06 03:02:48.333128 | orchestrator | Monday 06 April 2026 03:02:43 +0000 (0:00:00.905) 0:09:46.292 ********** 2026-04-06 03:02:48.333138 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:02:48.333149 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:02:48.333159 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:02:48.333170 | orchestrator | 2026-04-06 03:02:48.333181 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-06 03:02:48.333201 | orchestrator | Monday 06 April 2026 03:02:44 +0000 (0:00:00.377) 0:09:46.670 ********** 2026-04-06 03:02:48.333212 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:02:48.333224 | orchestrator | ok: [testbed-node-4] 2026-04-06 
03:02:48.333234 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:02:48.333245 | orchestrator | 2026-04-06 03:02:48.333256 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-06 03:02:48.333267 | orchestrator | Monday 06 April 2026 03:02:45 +0000 (0:00:00.776) 0:09:47.446 ********** 2026-04-06 03:02:48.333279 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:02:48.333289 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:02:48.333299 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:02:48.333310 | orchestrator | 2026-04-06 03:02:48.333328 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-06 03:02:48.333338 | orchestrator | Monday 06 April 2026 03:02:46 +0000 (0:00:01.091) 0:09:48.537 ********** 2026-04-06 03:02:48.333349 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:02:48.333360 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:02:48.333372 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:02:48.333383 | orchestrator | 2026-04-06 03:02:48.333394 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-06 03:02:48.333404 | orchestrator | Monday 06 April 2026 03:02:46 +0000 (0:00:00.767) 0:09:49.305 ********** 2026-04-06 03:02:48.333415 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:02:48.333427 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:02:48.333438 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:02:48.333450 | orchestrator | 2026-04-06 03:02:48.333462 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-06 03:02:48.333473 | orchestrator | Monday 06 April 2026 03:02:47 +0000 (0:00:00.389) 0:09:49.694 ********** 2026-04-06 03:02:48.333484 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:02:48.333496 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:02:48.333507 | orchestrator | skipping: 
[testbed-node-5] 2026-04-06 03:02:48.333518 | orchestrator | 2026-04-06 03:02:48.333529 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-06 03:02:48.333539 | orchestrator | Monday 06 April 2026 03:02:47 +0000 (0:00:00.375) 0:09:50.070 ********** 2026-04-06 03:02:48.333550 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:02:48.333561 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:02:48.333573 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:02:48.333583 | orchestrator | 2026-04-06 03:02:48.333593 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-06 03:02:48.333614 | orchestrator | Monday 06 April 2026 03:02:48 +0000 (0:00:00.644) 0:09:50.714 ********** 2026-04-06 03:03:11.510479 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:03:11.510583 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:03:11.510597 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:03:11.510607 | orchestrator | 2026-04-06 03:03:11.510618 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-06 03:03:11.510628 | orchestrator | Monday 06 April 2026 03:02:49 +0000 (0:00:00.740) 0:09:51.455 ********** 2026-04-06 03:03:11.510637 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:03:11.510646 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:03:11.510655 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:03:11.510663 | orchestrator | 2026-04-06 03:03:11.510747 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-06 03:03:11.510757 | orchestrator | Monday 06 April 2026 03:02:49 +0000 (0:00:00.750) 0:09:52.205 ********** 2026-04-06 03:03:11.510765 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:03:11.510774 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:03:11.510783 | orchestrator | skipping: [testbed-node-5] 2026-04-06 
03:03:11.510791 | orchestrator | 2026-04-06 03:03:11.510799 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-06 03:03:11.510807 | orchestrator | Monday 06 April 2026 03:02:50 +0000 (0:00:00.339) 0:09:52.545 ********** 2026-04-06 03:03:11.510834 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:03:11.510843 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:03:11.510851 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:03:11.510860 | orchestrator | 2026-04-06 03:03:11.510868 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-06 03:03:11.510876 | orchestrator | Monday 06 April 2026 03:02:50 +0000 (0:00:00.658) 0:09:53.203 ********** 2026-04-06 03:03:11.510884 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:03:11.510892 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:03:11.510900 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:03:11.510908 | orchestrator | 2026-04-06 03:03:11.510916 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-06 03:03:11.510924 | orchestrator | Monday 06 April 2026 03:02:51 +0000 (0:00:00.378) 0:09:53.582 ********** 2026-04-06 03:03:11.510932 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:03:11.510940 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:03:11.510947 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:03:11.510955 | orchestrator | 2026-04-06 03:03:11.510963 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-06 03:03:11.510971 | orchestrator | Monday 06 April 2026 03:02:51 +0000 (0:00:00.392) 0:09:53.975 ********** 2026-04-06 03:03:11.510979 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:03:11.510991 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:03:11.511003 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:03:11.511017 | orchestrator | 2026-04-06 
03:03:11.511039 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-06 03:03:11.511051 | orchestrator | Monday 06 April 2026 03:02:51 +0000 (0:00:00.365) 0:09:54.340 ********** 2026-04-06 03:03:11.511065 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:03:11.511077 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:03:11.511090 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:03:11.511103 | orchestrator | 2026-04-06 03:03:11.511117 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-06 03:03:11.511130 | orchestrator | Monday 06 April 2026 03:02:52 +0000 (0:00:00.707) 0:09:55.047 ********** 2026-04-06 03:03:11.511142 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:03:11.511156 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:03:11.511170 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:03:11.511184 | orchestrator | 2026-04-06 03:03:11.511196 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-06 03:03:11.511210 | orchestrator | Monday 06 April 2026 03:02:52 +0000 (0:00:00.347) 0:09:55.395 ********** 2026-04-06 03:03:11.511224 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:03:11.511238 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:03:11.511251 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:03:11.511266 | orchestrator | 2026-04-06 03:03:11.511280 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-06 03:03:11.511294 | orchestrator | Monday 06 April 2026 03:02:53 +0000 (0:00:00.372) 0:09:55.767 ********** 2026-04-06 03:03:11.511309 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:03:11.511322 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:03:11.511337 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:03:11.511348 | orchestrator | 2026-04-06 03:03:11.511357 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-06 03:03:11.511382 | orchestrator | Monday 06 April 2026 03:02:53 +0000 (0:00:00.362) 0:09:56.129 ********** 2026-04-06 03:03:11.511390 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:03:11.511398 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:03:11.511406 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:03:11.511414 | orchestrator | 2026-04-06 03:03:11.511422 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-06 03:03:11.511430 | orchestrator | Monday 06 April 2026 03:02:54 +0000 (0:00:00.957) 0:09:57.087 ********** 2026-04-06 03:03:11.511438 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 03:03:11.511459 | orchestrator | 2026-04-06 03:03:11.511467 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-06 03:03:11.511475 | orchestrator | Monday 06 April 2026 03:02:55 +0000 (0:00:00.579) 0:09:57.666 ********** 2026-04-06 03:03:11.511483 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:03:11.511491 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-06 03:03:11.511499 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-06 03:03:11.511507 | orchestrator | 2026-04-06 03:03:11.511515 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-06 03:03:11.511523 | orchestrator | Monday 06 April 2026 03:02:57 +0000 (0:00:02.591) 0:10:00.258 ********** 2026-04-06 03:03:11.511531 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-06 03:03:11.511539 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-06 03:03:11.511547 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:03:11.511555 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-04-06 03:03:11.511563 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-06 03:03:11.511587 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:03:11.511595 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-06 03:03:11.511603 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-06 03:03:11.511611 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:03:11.511619 | orchestrator | 2026-04-06 03:03:11.511627 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-04-06 03:03:11.511634 | orchestrator | Monday 06 April 2026 03:02:59 +0000 (0:00:01.610) 0:10:01.868 ********** 2026-04-06 03:03:11.511647 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:03:11.511660 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:03:11.511698 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:03:11.511712 | orchestrator | 2026-04-06 03:03:11.511726 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-06 03:03:11.511738 | orchestrator | Monday 06 April 2026 03:02:59 +0000 (0:00:00.358) 0:10:02.227 ********** 2026-04-06 03:03:11.511751 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 03:03:11.511766 | orchestrator | 2026-04-06 03:03:11.511779 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-06 03:03:11.511791 | orchestrator | Monday 06 April 2026 03:03:00 +0000 (0:00:00.964) 0:10:03.192 ********** 2026-04-06 03:03:11.511817 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-06 03:03:11.511836 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-06 03:03:11.511851 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-06 03:03:11.511866 | orchestrator | 2026-04-06 03:03:11.511879 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-06 03:03:11.511895 | orchestrator | Monday 06 April 2026 03:03:01 +0000 (0:00:00.911) 0:10:04.104 ********** 2026-04-06 03:03:11.511908 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:03:11.511924 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-06 03:03:11.511940 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:03:11.511955 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:03:11.511969 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-06 03:03:11.511992 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-06 03:03:11.512000 | orchestrator | 2026-04-06 03:03:11.512008 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-06 03:03:11.512016 | orchestrator | Monday 06 April 2026 03:03:06 +0000 (0:00:04.755) 0:10:08.859 ********** 2026-04-06 03:03:11.512023 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:03:11.512031 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-06 03:03:11.512039 | orchestrator | 
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:03:11.512047 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-06 03:03:11.512055 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:03:11.512062 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-06 03:03:11.512070 | orchestrator | 2026-04-06 03:03:11.512084 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-06 03:03:11.512092 | orchestrator | Monday 06 April 2026 03:03:08 +0000 (0:00:02.440) 0:10:11.299 ********** 2026-04-06 03:03:11.512100 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-06 03:03:11.512108 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:03:11.512116 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-06 03:03:11.512124 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:03:11.512131 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-06 03:03:11.512139 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:03:11.512147 | orchestrator | 2026-04-06 03:03:11.512155 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-06 03:03:11.512162 | orchestrator | Monday 06 April 2026 03:03:10 +0000 (0:00:01.650) 0:10:12.950 ********** 2026-04-06 03:03:11.512170 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-06 03:03:11.512178 | orchestrator | 2026-04-06 03:03:11.512186 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-06 03:03:11.512193 | orchestrator | Monday 06 April 2026 03:03:10 +0000 (0:00:00.251) 0:10:13.201 ********** 2026-04-06 03:03:11.512201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})
2026-04-06 03:03:11.512209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 03:03:11.512224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 03:03:56.839177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 03:03:56.839290 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 03:03:56.839301 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:03:56.839310 | orchestrator |
2026-04-06 03:03:56.839318 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-04-06 03:03:56.839327 | orchestrator | Monday 06 April 2026 03:03:11 +0000 (0:00:00.700) 0:10:13.902 **********
2026-04-06 03:03:56.839334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 03:03:56.839341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 03:03:56.839348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 03:03:56.839382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 03:03:56.839409 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 03:03:56.839417 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:03:56.839424 | orchestrator |
2026-04-06 03:03:56.839432 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-04-06 03:03:56.839439 | orchestrator | Monday 06 April 2026 03:03:12 +0000 (0:00:00.653) 0:10:14.555 **********
2026-04-06 03:03:56.839445 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 03:03:56.839454 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 03:03:56.839461 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 03:03:56.839468 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 03:03:56.839475 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 03:03:56.839482 | orchestrator |
2026-04-06 03:03:56.839489 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-04-06 03:03:56.839496 | orchestrator | Monday 06 April 2026 03:03:42 +0000 (0:00:30.810) 0:10:45.366 **********
2026-04-06 03:03:56.839502 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:03:56.839509 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:03:56.839516 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:03:56.839522 | orchestrator |
2026-04-06 03:03:56.839529 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-04-06 03:03:56.839536 | orchestrator | Monday 06 April 2026 03:03:43 +0000 (0:00:00.362) 0:10:45.729 **********
2026-04-06 03:03:56.839543 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:03:56.839549 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:03:56.839556 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:03:56.839563 | orchestrator |
2026-04-06 03:03:56.839569 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-04-06 03:03:56.839588 | orchestrator | Monday 06 April 2026 03:03:43 +0000 (0:00:00.393) 0:10:46.123 **********
2026-04-06 03:03:56.839596 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 03:03:56.839603 | orchestrator |
2026-04-06 03:03:56.839610 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-04-06 03:03:56.839616 | orchestrator | Monday 06 April 2026 03:03:44 +0000 (0:00:01.005) 0:10:47.129 **********
2026-04-06 03:03:56.839623 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 03:03:56.839630 | orchestrator |
2026-04-06 03:03:56.839637 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-04-06 03:03:56.839643 | orchestrator | Monday 06 April 2026 03:03:45 +0000 (0:00:00.888) 0:10:48.017 **********
2026-04-06 03:03:56.839650 | orchestrator | changed: [testbed-node-3]
2026-04-06 03:03:56.839657 | orchestrator | changed: [testbed-node-4]
2026-04-06 03:03:56.839664 | orchestrator | changed: [testbed-node-5]
2026-04-06 03:03:56.839671 | orchestrator |
2026-04-06 03:03:56.839678 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-04-06 03:03:56.839684 | orchestrator | Monday 06 April 2026 03:03:46 +0000 (0:00:01.331) 0:10:49.349 **********
2026-04-06 03:03:56.839709 | orchestrator | changed: [testbed-node-3]
2026-04-06 03:03:56.839717 | orchestrator | changed: [testbed-node-4]
2026-04-06 03:03:56.839731 | orchestrator | changed: [testbed-node-5]
2026-04-06 03:03:56.839740 | orchestrator |
2026-04-06 03:03:56.839748 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-04-06 03:03:56.839756 | orchestrator | Monday 06 April 2026 03:03:48 +0000 (0:00:01.210) 0:10:50.559 **********
2026-04-06 03:03:56.839764 | orchestrator | changed: [testbed-node-3]
2026-04-06 03:03:56.839786 | orchestrator | changed: [testbed-node-4]
2026-04-06 03:03:56.839794 | orchestrator | changed: [testbed-node-5]
2026-04-06 03:03:56.839810 | orchestrator |
2026-04-06 03:03:56.839819 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-04-06 03:03:56.839835 | orchestrator | Monday 06 April 2026 03:03:50 +0000 (0:00:01.934) 0:10:52.494 **********
2026-04-06 03:03:56.839843 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-06 03:03:56.839852 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-06 03:03:56.839859 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-06 03:03:56.839867 | orchestrator |
2026-04-06 03:03:56.839876 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-06 03:03:56.839883 | orchestrator | Monday 06 April 2026 03:03:52 +0000 (0:00:02.887) 0:10:55.382 **********
2026-04-06 03:03:56.839891 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:03:56.839899 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:03:56.839907 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:03:56.839915 | orchestrator |
2026-04-06 03:03:56.839923 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-06 03:03:56.839934 | orchestrator | Monday 06 April 2026 03:03:53 +0000 (0:00:00.414) 0:10:55.796 **********
2026-04-06 03:03:56.839945 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 03:03:56.839956 | orchestrator |
2026-04-06 03:03:56.839967 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-06 03:03:56.839978 | orchestrator | Monday 06 April 2026 03:03:54 +0000 (0:00:00.948) 0:10:56.745 **********
2026-04-06 03:03:56.839989 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:03:56.840000 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:03:56.840011 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:03:56.840021 | orchestrator |
2026-04-06 03:03:56.840032 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-06 03:03:56.840043 | orchestrator | Monday 06 April 2026 03:03:54 +0000 (0:00:00.385) 0:10:57.131 **********
2026-04-06 03:03:56.840054 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:03:56.840064 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:03:56.840071 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:03:56.840078 | orchestrator |
2026-04-06 03:03:56.840085 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-06 03:03:56.840092 | orchestrator | Monday 06 April 2026 03:03:55 +0000 (0:00:00.378) 0:10:57.509 **********
2026-04-06 03:03:56.840099 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-06 03:03:56.840106 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-06 03:03:56.840113 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-06 03:03:56.840119 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:03:56.840126 | orchestrator |
2026-04-06 03:03:56.840133 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-06 03:03:56.840140 | orchestrator | Monday 06 April 2026 03:03:56 +0000 (0:00:01.095) 0:10:58.605 **********
2026-04-06 03:03:56.840147 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:03:56.840154 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:03:56.840161 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:03:56.840168 | orchestrator |
2026-04-06 03:03:56.840183 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 03:03:56.840190 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-04-06 03:03:56.840198 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-04-06 03:03:56.840210 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-04-06 03:03:56.840217 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-04-06 03:03:56.840224 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-04-06 03:03:56.840230 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-04-06 03:03:56.840237 | orchestrator |
2026-04-06 03:03:56.840244 | orchestrator |
2026-04-06 03:03:56.840251 | orchestrator |
2026-04-06 03:03:56.840258 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 03:03:56.840264 | orchestrator | Monday 06 April 2026 03:03:56 +0000 (0:00:00.609) 0:10:59.214 **********
2026-04-06 03:03:56.840271 | orchestrator | ===============================================================================
2026-04-06 03:03:56.840278 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 62.70s
2026-04-06 03:03:56.840285 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.29s
2026-04-06 03:03:56.840291 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.81s
2026-04-06 03:03:56.840298 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.63s
2026-04-06 03:03:56.840305 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.58s
2026-04-06 03:03:56.840317 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.89s
2026-04-06 03:03:57.372245 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.79s
2026-04-06 03:03:57.372341 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.40s
2026-04-06 03:03:57.372351 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.13s
2026-04-06 03:03:57.372360 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.88s
2026-04-06 03:03:57.372368 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.60s
2026-04-06 03:03:57.372375 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.15s
2026-04-06 03:03:57.372382 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.76s
2026-04-06 03:03:57.372389 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.38s
2026-04-06 03:03:57.372396 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.92s
2026-04-06 03:03:57.372403 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.85s
2026-04-06 03:03:57.372411 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.67s
2026-04-06 03:03:57.372418 | orchestrator | ceph-mds : Create mds keyring ------------------------------------------- 3.62s
2026-04-06 03:03:57.372425 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.57s
2026-04-06 03:03:57.372432 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.24s
2026-04-06 03:04:00.174597 | orchestrator | 2026-04-06 03:04:00 | INFO  | Task a1bdd413-c151-4b19-b921-353d2c7f3603 (ceph-pools) was prepared for execution.
2026-04-06 03:04:00.174666 | orchestrator | 2026-04-06 03:04:00 | INFO  | It takes a moment until task a1bdd413-c151-4b19-b921-353d2c7f3603 (ceph-pools) has been started and output is visible here.
2026-04-06 03:04:15.728383 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-06 03:04:15.728487 | orchestrator | 2.16.14
2026-04-06 03:04:15.728499 | orchestrator |
2026-04-06 03:04:15.728506 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-04-06 03:04:15.728513 | orchestrator |
2026-04-06 03:04:15.728520 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-06 03:04:15.728527 | orchestrator | Monday 06 April 2026 03:04:05 +0000 (0:00:00.661) 0:00:00.661 **********
2026-04-06 03:04:15.728534 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 03:04:15.728541 | orchestrator |
2026-04-06 03:04:15.728547 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-06 03:04:15.728553 | orchestrator | Monday 06 April 2026 03:04:06 +0000 (0:00:00.716) 0:00:01.378 **********
2026-04-06 03:04:15.728559 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:04:15.728566 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:04:15.728572 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:04:15.728578 | orchestrator |
2026-04-06 03:04:15.728585 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-06 03:04:15.728591 | orchestrator | Monday 06 April 2026 03:04:06 +0000 (0:00:00.662) 0:00:02.040 **********
2026-04-06 03:04:15.728598 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:04:15.728604 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:04:15.728610 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:04:15.728616 | orchestrator |
2026-04-06 03:04:15.728622 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-06 03:04:15.728629 | orchestrator | Monday 06 April 2026 03:04:06 +0000 (0:00:00.324) 0:00:02.365 **********
2026-04-06 03:04:15.728635 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:04:15.728641 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:04:15.728647 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:04:15.728653 | orchestrator |
2026-04-06 03:04:15.728659 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-06 03:04:15.728678 | orchestrator | Monday 06 April 2026 03:04:07 +0000 (0:00:00.940) 0:00:03.305 **********
2026-04-06 03:04:15.728686 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:04:15.728692 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:04:15.728698 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:04:15.728753 | orchestrator |
2026-04-06 03:04:15.728760 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-06 03:04:15.728767 | orchestrator | Monday 06 April 2026 03:04:08 +0000 (0:00:00.366) 0:00:03.672 **********
2026-04-06 03:04:15.728773 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:04:15.728779 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:04:15.728785 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:04:15.728792 | orchestrator |
2026-04-06 03:04:15.728798 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-06 03:04:15.728804 | orchestrator | Monday 06 April 2026 03:04:08 +0000 (0:00:00.333) 0:00:04.006 **********
2026-04-06 03:04:15.728810 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:04:15.728816 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:04:15.728822 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:04:15.728828 | orchestrator |
2026-04-06 03:04:15.728834 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-06 03:04:15.728841 | orchestrator | Monday 06 April 2026 03:04:09 +0000 (0:00:00.373) 0:00:04.379 **********
2026-04-06 03:04:15.728847 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:04:15.728854 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:04:15.728861 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:04:15.728867 | orchestrator |
2026-04-06 03:04:15.728873 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-06 03:04:15.728879 | orchestrator | Monday 06 April 2026 03:04:09 +0000 (0:00:00.595) 0:00:04.975 **********
2026-04-06 03:04:15.728905 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:04:15.728912 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:04:15.728918 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:04:15.728924 | orchestrator |
2026-04-06 03:04:15.728931 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-06 03:04:15.728937 | orchestrator | Monday 06 April 2026 03:04:09 +0000 (0:00:00.330) 0:00:05.306 **********
2026-04-06 03:04:15.728943 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-06 03:04:15.728950 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 03:04:15.728956 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 03:04:15.728962 | orchestrator |
2026-04-06 03:04:15.728969 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-06 03:04:15.728976 | orchestrator | Monday 06 April 2026 03:04:10 +0000 (0:00:00.716) 0:00:06.022 **********
2026-04-06 03:04:15.728982 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:04:15.728989 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:04:15.728996 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:04:15.729003 | orchestrator |
2026-04-06 03:04:15.729009 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-06 03:04:15.729016 | orchestrator | Monday 06 April 2026 03:04:11 +0000 (0:00:00.489) 0:00:06.512 **********
2026-04-06 03:04:15.729023 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-06 03:04:15.729029 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 03:04:15.729036 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 03:04:15.729042 | orchestrator |
2026-04-06 03:04:15.729048 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-06 03:04:15.729054 | orchestrator | Monday 06 April 2026 03:04:13 +0000 (0:00:02.271) 0:00:08.784 **********
2026-04-06 03:04:15.729061 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-06 03:04:15.729068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-06 03:04:15.729074 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-06 03:04:15.729080 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:04:15.729086 | orchestrator |
2026-04-06 03:04:15.729108 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-06 03:04:15.729114 | orchestrator | Monday 06 April 2026 03:04:14 +0000 (0:00:00.713) 0:00:09.497 **********
2026-04-06 03:04:15.729124 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-06 03:04:15.729133 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-06 03:04:15.729140 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-06 03:04:15.729146 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:04:15.729153 | orchestrator |
2026-04-06 03:04:15.729159 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-06 03:04:15.729165 | orchestrator | Monday 06 April 2026 03:04:15 +0000 (0:00:01.162) 0:00:10.659 **********
2026-04-06 03:04:15.729179 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-06 03:04:15.729193 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-06 03:04:15.729201 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-06 03:04:15.729207 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:04:15.729214 | orchestrator |
2026-04-06 03:04:15.729220 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-06 03:04:15.729227 | orchestrator | Monday 06 April 2026 03:04:15 +0000 (0:00:00.190) 0:00:10.850 **********
2026-04-06 03:04:15.729235 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '7ab3f7ebb0fe', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-06 03:04:12.051669', 'end': '2026-04-06 03:04:12.097230', 'delta': '0:00:00.045561', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7ab3f7ebb0fe'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-06 03:04:15.729245 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '46d5ea15fe96', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-06 03:04:12.657239', 'end': '2026-04-06 03:04:12.709281', 'delta': '0:00:00.052042', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['46d5ea15fe96'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-06 03:04:15.729257 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a87eea657fd7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-06 03:04:13.219736', 'end': '2026-04-06 03:04:13.267234', 'delta': '0:00:00.047498', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a87eea657fd7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-06 03:04:23.413978 | orchestrator |
2026-04-06 03:04:23.414145 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-06 03:04:23.414165 | orchestrator | Monday 06 April 2026 03:04:15 +0000 (0:00:00.236) 0:00:11.086 **********
2026-04-06 03:04:23.414176 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:04:23.414209 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:04:23.414220 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:04:23.414230 | orchestrator |
2026-04-06 03:04:23.414241 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-06 03:04:23.414250 | orchestrator | Monday 06 April 2026 03:04:16 +0000 (0:00:00.518) 0:00:11.605 **********
2026-04-06 03:04:23.414261 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-06 03:04:23.414271 | orchestrator |
2026-04-06 03:04:23.414280 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-06 03:04:23.414304 | orchestrator | Monday 06 April 2026 03:04:18 +0000 (0:00:01.767) 0:00:13.373 **********
2026-04-06 03:04:23.414314 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:04:23.414324 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:04:23.414334 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:04:23.414343 | orchestrator |
2026-04-06 03:04:23.414353 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-06 03:04:23.414362 | orchestrator | Monday 06 April 2026 03:04:18 +0000 (0:00:00.338) 0:00:13.711 **********
2026-04-06 03:04:23.414372 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:04:23.414382 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:04:23.414391 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:04:23.414401 | orchestrator |
2026-04-06 03:04:23.414410 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-06 03:04:23.414420 | orchestrator | Monday 06 April 2026 03:04:19 +0000 (0:00:01.022) 0:00:14.734 **********
2026-04-06 03:04:23.414430 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:04:23.414439 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:04:23.414449 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:04:23.414458 | orchestrator |
2026-04-06 03:04:23.414468 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-06 03:04:23.414478 | orchestrator | Monday 06 April 2026 03:04:19 +0000 (0:00:00.336) 0:00:15.070 **********
2026-04-06 03:04:23.414488 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:04:23.414498 | orchestrator |
2026-04-06 03:04:23.414507 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-06 03:04:23.414519 | orchestrator | Monday 06 April 2026 03:04:19 +0000 (0:00:00.156) 0:00:15.227 **********
2026-04-06 03:04:23.414530 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:04:23.414541 | orchestrator |
2026-04-06 03:04:23.414555 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-06 03:04:23.414574 | orchestrator | Monday 06 April 2026 03:04:20 +0000 (0:00:00.277) 0:00:15.504 **********
2026-04-06 03:04:23.414591 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:04:23.414607 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:04:23.414624 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:04:23.414642 | orchestrator |
2026-04-06 03:04:23.414689 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-06 03:04:23.414734 | orchestrator | Monday 06 April 2026 03:04:20 +0000 (0:00:00.338) 0:00:15.843 **********
2026-04-06 03:04:23.414753 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:04:23.414771 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:04:23.414787 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:04:23.414805 | orchestrator |
2026-04-06 03:04:23.414817 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-06 03:04:23.414828 | orchestrator | Monday 06 April 2026 03:04:20 +0000 (0:00:00.358) 0:00:16.202 **********
2026-04-06 03:04:23.414839 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:04:23.414850 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:04:23.414862 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:04:23.414873 | orchestrator |
2026-04-06 03:04:23.414883 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-06 03:04:23.414893 | orchestrator | Monday 06 April 2026 03:04:21 +0000 (0:00:00.599) 0:00:16.801 **********
2026-04-06 03:04:23.414902 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:04:23.414925 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:04:23.414935 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:04:23.414945 | orchestrator |
2026-04-06 03:04:23.414955 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-06 03:04:23.414965 | orchestrator | Monday 06 April 2026 03:04:21 +0000 (0:00:00.358) 0:00:17.160 **********
2026-04-06 03:04:23.414975 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:04:23.414985 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:04:23.414994 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:04:23.415004 | orchestrator |
2026-04-06 03:04:23.415014 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-06 03:04:23.415023 | orchestrator | Monday 06 April 2026 03:04:22 +0000 (0:00:00.380) 0:00:17.540 **********
2026-04-06 03:04:23.415033 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:04:23.415043 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:04:23.415054 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:04:23.415070 | orchestrator |
2026-04-06 03:04:23.415085 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-06 03:04:23.415102 | orchestrator | Monday 06 April 2026 03:04:22 +0000 (0:00:00.619) 0:00:18.159 **********
2026-04-06 03:04:23.415117 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:04:23.415132 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:04:23.415147 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:04:23.415163 | orchestrator |
2026-04-06 03:04:23.415179 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-06 03:04:23.415196 | orchestrator | Monday 06 April 2026 03:04:23 +0000 (0:00:00.374) 0:00:18.534 **********
2026-04-06 03:04:23.415241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--44d7a625--0d29--5597--9a0c--b91ce06f2e33-osd--block--44d7a625--0d29--5597--9a0c--b91ce06f2e33', 'dm-uuid-LVM-9nFw926dfpKXupvgijedzJHToRNmcQ5JleWHVnoic4cgBgjJKwf9UMEMV2wXFYs3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-06 03:04:23.415268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--33ff4195--b9ae--565c--9501--f62265c8cf2c-osd--block--33ff4195--b9ae--565c--9501--f62265c8cf2c', 'dm-uuid-LVM-bPoYmFvg2GavrOdhBiQRDEx8f4M6ftpRd0WF3SgLoZI9250ovpvj600rDtqy23dS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-06 03:04:23.415281 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-06 03:04:23.415294 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-06 03:04:23.415304 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-06 03:04:23.415324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-06 03:04:23.415334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-06 03:04:23.415344 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-06 03:04:23.415354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-06 03:04:23.415372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-06 03:04:23.485243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part15', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part16', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-06 03:04:23.485358 |
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--44d7a625--0d29--5597--9a0c--b91ce06f2e33-osd--block--44d7a625--0d29--5597--9a0c--b91ce06f2e33'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KIe40k-k1Qf-BSLn-gKBM-IKSP-hovG-JLrIYd', 'scsi-0QEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527', 'scsi-SQEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 03:04:23.485373 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3-osd--block--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3', 'dm-uuid-LVM-UTQM7S53ibMHEifiI2Bv5Thw7s0lsM0j7tdY8LLV0Ub3l0Z8I0Y4chNDJ3j6J7vO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-06 03:04:23.485401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--33ff4195--b9ae--565c--9501--f62265c8cf2c-osd--block--33ff4195--b9ae--565c--9501--f62265c8cf2c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oc9r6Q-FBfB-APQ9-Ef3d-Gduy-n2RE-MAdmSJ', 'scsi-0QEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c', 'scsi-SQEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 03:04:23.485424 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c307d7c--3927--5061--a8a8--155bb148bb1a-osd--block--8c307d7c--3927--5061--a8a8--155bb148bb1a', 'dm-uuid-LVM-5SBcK6LYcqc3U9JW4A7AEqQb9XhQaJZNALmkUrHWUZpUhCY8hyCk4SVv02FoAkUp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-06 03:04:23.485435 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103', 'scsi-SQEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 03:04:23.485452 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 03:04:23.485464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-06-01-39-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 03:04:23.485473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-06 03:04:23.485484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 03:04:23.485493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 03:04:23.485507 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 03:04:23.634391 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 03:04:23.634490 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 03:04:23.634506 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 03:04:23.634540 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:04:23.634557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part1', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part14', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part15', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part16', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 03:04:23.634590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3-osd--block--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9JZghf-Tj4T-hJH3-TdHl-k5PF-Zmcx-ynVATr', 'scsi-0QEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554', 'scsi-SQEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 03:04:23.634609 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8c307d7c--3927--5061--a8a8--155bb148bb1a-osd--block--8c307d7c--3927--5061--a8a8--155bb148bb1a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bmjYoX-DOC2-0AWC-rYYB-WEnJ-01uQ-WQd2JR', 'scsi-0QEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867', 'scsi-SQEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 03:04:23.634620 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de', 'scsi-SQEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 03:04:23.634640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-06-01-39-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 03:04:23.634652 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:04:23.634663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fcd584d6--c8ff--5eaf--81cc--26105cfb5447-osd--block--fcd584d6--c8ff--5eaf--81cc--26105cfb5447', 'dm-uuid-LVM-DDg0C3XoaiYrOzMcB0kfPfqzHg8E5JhRWG4AoOycNeM5Q2WICfjMBHF0YX2mqeJt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-06 03:04:23.634674 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--4d79f264--f564--5244--b3d4--1e30cd615742-osd--block--4d79f264--f564--5244--b3d4--1e30cd615742', 'dm-uuid-LVM-Z6Gfl68NWHSIaTDLndMKbJ9g2vXxLKS7H7IVDVpTPXM3dDz207hlZrQACS13BMNP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-06 03:04:23.634685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 03:04:23.634769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 03:04:23.939047 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 03:04:23.939175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 03:04:23.939217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 03:04:23.939229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 03:04:23.939240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 03:04:23.939250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-06 03:04:23.939295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 03:04:23.939320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--fcd584d6--c8ff--5eaf--81cc--26105cfb5447-osd--block--fcd584d6--c8ff--5eaf--81cc--26105cfb5447'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lROe02-FRbV-W78v-Dfl5-E5Bd-fAVM-rPPzrC', 'scsi-0QEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d', 'scsi-SQEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 03:04:23.939332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--4d79f264--f564--5244--b3d4--1e30cd615742-osd--block--4d79f264--f564--5244--b3d4--1e30cd615742'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5lLdRw-7tLp-t2wE-raTC-2xO3-NEEr-mCIRos', 'scsi-0QEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0', 'scsi-SQEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 03:04:23.939343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c', 'scsi-SQEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 03:04:23.939355 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-06-01-39-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-06 03:04:23.939367 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:04:23.939379 | orchestrator | 2026-04-06 03:04:23.939390 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-04-06 03:04:23.939401 | orchestrator | Monday 06 April 2026 03:04:23 +0000 (0:00:00.657) 0:00:19.192 ********** 2026-04-06 03:04:23.939420 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--44d7a625--0d29--5597--9a0c--b91ce06f2e33-osd--block--44d7a625--0d29--5597--9a0c--b91ce06f2e33', 'dm-uuid-LVM-9nFw926dfpKXupvgijedzJHToRNmcQ5JleWHVnoic4cgBgjJKwf9UMEMV2wXFYs3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.073903 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--33ff4195--b9ae--565c--9501--f62265c8cf2c-osd--block--33ff4195--b9ae--565c--9501--f62265c8cf2c', 'dm-uuid-LVM-bPoYmFvg2GavrOdhBiQRDEx8f4M6ftpRd0WF3SgLoZI9250ovpvj600rDtqy23dS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.074012 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.074074 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.074080 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.074087 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.074093 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.074124 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.074138 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3-osd--block--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3', 'dm-uuid-LVM-UTQM7S53ibMHEifiI2Bv5Thw7s0lsM0j7tdY8LLV0Ub3l0Z8I0Y4chNDJ3j6J7vO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.074144 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.074151 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8c307d7c--3927--5061--a8a8--155bb148bb1a-osd--block--8c307d7c--3927--5061--a8a8--155bb148bb1a', 'dm-uuid-LVM-5SBcK6LYcqc3U9JW4A7AEqQb9XhQaJZNALmkUrHWUZpUhCY8hyCk4SVv02FoAkUp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-04-06 03:04:24.074157 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.074163 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.074202 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part15', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part16', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-06 03:04:24.177546 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.177646 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--44d7a625--0d29--5597--9a0c--b91ce06f2e33-osd--block--44d7a625--0d29--5597--9a0c--b91ce06f2e33'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-KIe40k-k1Qf-BSLn-gKBM-IKSP-hovG-JLrIYd', 'scsi-0QEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527', 'scsi-SQEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.177660 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.177780 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--33ff4195--b9ae--565c--9501--f62265c8cf2c-osd--block--33ff4195--b9ae--565c--9501--f62265c8cf2c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oc9r6Q-FBfB-APQ9-Ef3d-Gduy-n2RE-MAdmSJ', 'scsi-0QEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c', 'scsi-SQEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.177794 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.177824 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103', 'scsi-SQEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.177836 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.177847 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-06-01-39-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.177872 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.177883 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.177894 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.177916 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part1', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part14', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part15', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part16', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.366360 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3-osd--block--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9JZghf-Tj4T-hJH3-TdHl-k5PF-Zmcx-ynVATr', 'scsi-0QEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554', 'scsi-SQEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.366437 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:04:24.366446 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8c307d7c--3927--5061--a8a8--155bb148bb1a-osd--block--8c307d7c--3927--5061--a8a8--155bb148bb1a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-bmjYoX-DOC2-0AWC-rYYB-WEnJ-01uQ-WQd2JR', 'scsi-0QEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867', 'scsi-SQEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.366453 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de', 'scsi-SQEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.366460 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-06-01-39-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.366483 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:04:24.366501 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fcd584d6--c8ff--5eaf--81cc--26105cfb5447-osd--block--fcd584d6--c8ff--5eaf--81cc--26105cfb5447', 'dm-uuid-LVM-DDg0C3XoaiYrOzMcB0kfPfqzHg8E5JhRWG4AoOycNeM5Q2WICfjMBHF0YX2mqeJt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.366512 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d79f264--f564--5244--b3d4--1e30cd615742-osd--block--4d79f264--f564--5244--b3d4--1e30cd615742', 'dm-uuid-LVM-Z6Gfl68NWHSIaTDLndMKbJ9g2vXxLKS7H7IVDVpTPXM3dDz207hlZrQACS13BMNP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.366518 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.366524 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.366529 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.366535 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:24.366547 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:25.855847 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:25.855984 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:25.856003 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:25.856043 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-06 03:04:25.856123 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--fcd584d6--c8ff--5eaf--81cc--26105cfb5447-osd--block--fcd584d6--c8ff--5eaf--81cc--26105cfb5447'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lROe02-FRbV-W78v-Dfl5-E5Bd-fAVM-rPPzrC', 'scsi-0QEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d', 'scsi-SQEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:25.856141 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--4d79f264--f564--5244--b3d4--1e30cd615742-osd--block--4d79f264--f564--5244--b3d4--1e30cd615742'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5lLdRw-7tLp-t2wE-raTC-2xO3-NEEr-mCIRos', 'scsi-0QEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0', 'scsi-SQEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:25.856154 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c', 'scsi-SQEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:25.856167 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-06-01-39-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-06 03:04:25.856193 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:04:25.856206 | orchestrator | 2026-04-06 03:04:25.856219 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-06 03:04:25.856231 | orchestrator | Monday 06 April 2026 03:04:24 +0000 (0:00:00.724) 0:00:19.917 ********** 2026-04-06 03:04:25.856245 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:04:25.856259 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:04:25.856272 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:04:25.856285 | orchestrator | 2026-04-06 03:04:25.856297 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-06 03:04:25.856310 | orchestrator | Monday 06 April 2026 03:04:25 +0000 (0:00:00.933) 0:00:20.850 ********** 2026-04-06 03:04:25.856324 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:04:25.856336 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:04:25.856348 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:04:25.856361 | orchestrator | 2026-04-06 03:04:25.856382 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 03:05:24.299426 | orchestrator | Monday 06 April 2026 03:04:25 +0000 (0:00:00.365) 0:00:21.215 ********** 2026-04-06 03:05:24.299516 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:05:24.299525 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:05:24.299531 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:05:24.299537 | orchestrator | 2026-04-06 03:05:24.299543 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 03:05:24.299560 | orchestrator | Monday 06 April 2026 03:04:26 +0000 (0:00:00.696) 0:00:21.912 
********** 2026-04-06 03:05:24.299566 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:05:24.299573 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:05:24.299589 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:05:24.299600 | orchestrator | 2026-04-06 03:05:24.299606 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 03:05:24.299611 | orchestrator | Monday 06 April 2026 03:04:26 +0000 (0:00:00.329) 0:00:22.242 ********** 2026-04-06 03:05:24.299617 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:05:24.299622 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:05:24.299627 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:05:24.299632 | orchestrator | 2026-04-06 03:05:24.299637 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 03:05:24.299642 | orchestrator | Monday 06 April 2026 03:04:27 +0000 (0:00:00.779) 0:00:23.021 ********** 2026-04-06 03:05:24.299648 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:05:24.299653 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:05:24.299658 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:05:24.299663 | orchestrator | 2026-04-06 03:05:24.299668 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-06 03:05:24.299674 | orchestrator | Monday 06 April 2026 03:04:27 +0000 (0:00:00.335) 0:00:23.356 ********** 2026-04-06 03:05:24.299679 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-06 03:05:24.299684 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-06 03:05:24.299690 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-06 03:05:24.299695 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-06 03:05:24.299700 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-06 03:05:24.299705 | orchestrator 
| ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-06 03:05:24.299710 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-06 03:05:24.299715 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-06 03:05:24.299738 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-06 03:05:24.299775 | orchestrator | 2026-04-06 03:05:24.299782 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-06 03:05:24.299787 | orchestrator | Monday 06 April 2026 03:04:29 +0000 (0:00:01.198) 0:00:24.554 ********** 2026-04-06 03:05:24.299793 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-06 03:05:24.299798 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-06 03:05:24.299803 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-06 03:05:24.299808 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:05:24.299822 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-06 03:05:24.299834 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-06 03:05:24.299839 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-06 03:05:24.299844 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:05:24.299849 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-06 03:05:24.299854 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-06 03:05:24.299859 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-06 03:05:24.299864 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:05:24.299869 | orchestrator | 2026-04-06 03:05:24.299875 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-06 03:05:24.299880 | orchestrator | Monday 06 April 2026 03:04:29 +0000 (0:00:00.430) 0:00:24.985 ********** 2026-04-06 
03:05:24.299886 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 03:05:24.299891 | orchestrator | 2026-04-06 03:05:24.299897 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-06 03:05:24.299904 | orchestrator | Monday 06 April 2026 03:04:30 +0000 (0:00:00.837) 0:00:25.822 ********** 2026-04-06 03:05:24.299909 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:05:24.299914 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:05:24.299920 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:05:24.299925 | orchestrator | 2026-04-06 03:05:24.299930 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-06 03:05:24.299935 | orchestrator | Monday 06 April 2026 03:04:30 +0000 (0:00:00.374) 0:00:26.196 ********** 2026-04-06 03:05:24.299940 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:05:24.299946 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:05:24.299951 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:05:24.299956 | orchestrator | 2026-04-06 03:05:24.299961 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-06 03:05:24.299966 | orchestrator | Monday 06 April 2026 03:04:31 +0000 (0:00:00.345) 0:00:26.542 ********** 2026-04-06 03:05:24.299971 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:05:24.299976 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:05:24.299982 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:05:24.299988 | orchestrator | 2026-04-06 03:05:24.299994 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-06 03:05:24.300000 | orchestrator | Monday 06 April 2026 03:04:31 +0000 (0:00:00.616) 0:00:27.158 ********** 2026-04-06 
03:05:24.300007 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:05:24.300013 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:05:24.300019 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:05:24.300025 | orchestrator | 2026-04-06 03:05:24.300031 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-06 03:05:24.300049 | orchestrator | Monday 06 April 2026 03:04:32 +0000 (0:00:00.457) 0:00:27.615 ********** 2026-04-06 03:05:24.300055 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 03:05:24.300061 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 03:05:24.300067 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 03:05:24.300080 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:05:24.300086 | orchestrator | 2026-04-06 03:05:24.300096 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-06 03:05:24.300103 | orchestrator | Monday 06 April 2026 03:04:32 +0000 (0:00:00.420) 0:00:28.036 ********** 2026-04-06 03:05:24.300109 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 03:05:24.300114 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 03:05:24.300121 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 03:05:24.300126 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:05:24.300133 | orchestrator | 2026-04-06 03:05:24.300139 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-06 03:05:24.300145 | orchestrator | Monday 06 April 2026 03:04:33 +0000 (0:00:00.412) 0:00:28.449 ********** 2026-04-06 03:05:24.300151 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 03:05:24.300157 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 03:05:24.300163 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 03:05:24.300169 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:05:24.300175 | orchestrator | 2026-04-06 03:05:24.300180 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-06 03:05:24.300187 | orchestrator | Monday 06 April 2026 03:04:33 +0000 (0:00:00.414) 0:00:28.863 ********** 2026-04-06 03:05:24.300193 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:05:24.300199 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:05:24.300205 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:05:24.300210 | orchestrator | 2026-04-06 03:05:24.300217 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-06 03:05:24.300223 | orchestrator | Monday 06 April 2026 03:04:33 +0000 (0:00:00.375) 0:00:29.239 ********** 2026-04-06 03:05:24.300229 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-06 03:05:24.300235 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-06 03:05:24.300241 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-06 03:05:24.300247 | orchestrator | 2026-04-06 03:05:24.300253 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-06 03:05:24.300259 | orchestrator | Monday 06 April 2026 03:04:34 +0000 (0:00:00.857) 0:00:30.097 ********** 2026-04-06 03:05:24.300265 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 03:05:24.300272 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 03:05:24.300278 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 03:05:24.300284 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-06 03:05:24.300290 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-04-06 03:05:24.300296 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-06 03:05:24.300302 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-06 03:05:24.300308 | orchestrator | 2026-04-06 03:05:24.300314 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-06 03:05:24.300320 | orchestrator | Monday 06 April 2026 03:04:35 +0000 (0:00:00.955) 0:00:31.052 ********** 2026-04-06 03:05:24.300326 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 03:05:24.300332 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 03:05:24.300339 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 03:05:24.300344 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-06 03:05:24.300350 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-06 03:05:24.300361 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-06 03:05:24.300367 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-06 03:05:24.300373 | orchestrator | 2026-04-06 03:05:24.300379 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-04-06 03:05:24.300385 | orchestrator | Monday 06 April 2026 03:04:37 +0000 (0:00:01.860) 0:00:32.913 ********** 2026-04-06 03:05:24.300391 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:05:24.300397 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:05:24.300403 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-04-06 03:05:24.300409 | orchestrator | 2026-04-06 03:05:24.300415 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-04-06 03:05:24.300421 | orchestrator | Monday 06 April 2026 03:04:37 +0000 (0:00:00.407) 0:00:33.320 ********** 2026-04-06 03:05:24.300430 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-06 03:05:24.300441 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-06 03:06:18.816254 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-06 03:06:18.816338 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-06 03:06:18.816346 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-06 03:06:18.816351 | orchestrator | 2026-04-06 03:06:18.816357 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-04-06 03:06:18.816363 | orchestrator | Monday 06 April 2026 03:05:24 +0000 (0:00:46.332) 0:01:19.652 ********** 2026-04-06 03:06:18.816367 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:06:18.816373 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:06:18.816377 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:06:18.816381 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:06:18.816386 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:06:18.816390 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:06:18.816396 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-04-06 03:06:18.816400 | orchestrator | 2026-04-06 03:06:18.816404 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-04-06 03:06:18.816409 | orchestrator | Monday 06 April 2026 03:05:49 +0000 (0:00:24.977) 0:01:44.630 ********** 2026-04-06 03:06:18.816413 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:06:18.816418 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:06:18.816438 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:06:18.816442 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:06:18.816447 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:06:18.816461 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:06:18.816466 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-06 03:06:18.816476 | orchestrator | 2026-04-06 03:06:18.816481 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-04-06 03:06:18.816485 | orchestrator | Monday 06 April 2026 03:06:01 +0000 (0:00:11.975) 0:01:56.605 ********** 2026-04-06 03:06:18.816490 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:06:18.816494 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-06 03:06:18.816498 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-06 03:06:18.816503 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:06:18.816507 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-06 03:06:18.816512 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-06 03:06:18.816516 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:06:18.816520 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-06 03:06:18.816525 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-06 03:06:18.816529 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:06:18.816533 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-06 03:06:18.816538 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-06 03:06:18.816542 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:06:18.816546 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-04-06 03:06:18.816551 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-06 03:06:18.816555 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 03:06:18.816559 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-06 03:06:18.816564 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-06 03:06:18.816579 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-04-06 03:06:18.816584 | orchestrator | 2026-04-06 03:06:18.816589 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 03:06:18.816597 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-06 03:06:18.816603 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-06 03:06:18.816609 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-06 03:06:18.816614 | orchestrator | 2026-04-06 03:06:18.816618 | orchestrator | 2026-04-06 03:06:18.816622 | orchestrator | 2026-04-06 03:06:18.816627 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 03:06:18.816631 | orchestrator | Monday 06 April 2026 03:06:18 +0000 (0:00:17.135) 0:02:13.741 ********** 2026-04-06 03:06:18.816636 | orchestrator | =============================================================================== 2026-04-06 03:06:18.816640 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.33s 2026-04-06 03:06:18.816656 | orchestrator | generate keys ---------------------------------------------------------- 24.98s 2026-04-06 03:06:18.816660 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.14s 
2026-04-06 03:06:18.816665 | orchestrator | get keys from monitors ------------------------------------------------- 11.98s 2026-04-06 03:06:18.816675 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.27s 2026-04-06 03:06:18.816679 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.86s 2026-04-06 03:06:18.816684 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.77s 2026-04-06 03:06:18.816689 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.20s 2026-04-06 03:06:18.816693 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.16s 2026-04-06 03:06:18.816697 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 1.02s 2026-04-06 03:06:18.816702 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.96s 2026-04-06 03:06:18.816706 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.94s 2026-04-06 03:06:18.816711 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.93s 2026-04-06 03:06:18.816715 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.86s 2026-04-06 03:06:18.816719 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.84s 2026-04-06 03:06:18.816724 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.78s 2026-04-06 03:06:18.816728 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.72s 2026-04-06 03:06:18.816732 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.72s 2026-04-06 03:06:18.816737 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.72s 2026-04-06 
03:06:18.816741 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.71s 2026-04-06 03:06:21.392737 | orchestrator | 2026-04-06 03:06:21 | INFO  | Task 1ca021b4-e2a0-47ac-8c12-aa29c82c596e (copy-ceph-keys) was prepared for execution. 2026-04-06 03:06:21.392876 | orchestrator | 2026-04-06 03:06:21 | INFO  | It takes a moment until task 1ca021b4-e2a0-47ac-8c12-aa29c82c596e (copy-ceph-keys) has been started and output is visible here. 2026-04-06 03:07:01.875080 | orchestrator | 2026-04-06 03:07:01.875226 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-04-06 03:07:01.875251 | orchestrator | 2026-04-06 03:07:01.875269 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-04-06 03:07:01.875288 | orchestrator | Monday 06 April 2026 03:06:26 +0000 (0:00:00.187) 0:00:00.187 ********** 2026-04-06 03:07:01.875305 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-06 03:07:01.875325 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-06 03:07:01.875343 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-06 03:07:01.875394 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-06 03:07:01.875413 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-06 03:07:01.875431 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-04-06 03:07:01.875450 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-06 03:07:01.875467 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.gnocchi.keyring) 2026-04-06 03:07:01.875484 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-06 03:07:01.875533 | orchestrator | 2026-04-06 03:07:01.875550 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-04-06 03:07:01.875567 | orchestrator | Monday 06 April 2026 03:06:30 +0000 (0:00:04.259) 0:00:04.447 ********** 2026-04-06 03:07:01.875634 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-06 03:07:01.875651 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-06 03:07:01.875687 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-06 03:07:01.875705 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-06 03:07:01.875722 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-06 03:07:01.875740 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-04-06 03:07:01.875757 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-06 03:07:01.875775 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-04-06 03:07:01.875789 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-06 03:07:01.875798 | orchestrator | 2026-04-06 03:07:01.875868 | orchestrator | TASK [Create share directory] ************************************************** 2026-04-06 03:07:01.875885 | orchestrator | Monday 06 April 2026 03:06:34 +0000 (0:00:04.394) 0:00:08.841 ********** 2026-04-06 03:07:01.875901 
| orchestrator | changed: [testbed-manager -> localhost] 2026-04-06 03:07:01.875917 | orchestrator | 2026-04-06 03:07:01.875933 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-04-06 03:07:01.875950 | orchestrator | Monday 06 April 2026 03:06:35 +0000 (0:00:01.042) 0:00:09.884 ********** 2026-04-06 03:07:01.875966 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-04-06 03:07:01.875983 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-06 03:07:01.875997 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-06 03:07:01.876008 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-04-06 03:07:01.876018 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-06 03:07:01.876028 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-04-06 03:07:01.876037 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-04-06 03:07:01.876047 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-04-06 03:07:01.876057 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-04-06 03:07:01.876067 | orchestrator | 2026-04-06 03:07:01.876076 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-04-06 03:07:01.876086 | orchestrator | Monday 06 April 2026 03:06:50 +0000 (0:00:14.878) 0:00:24.763 ********** 2026-04-06 03:07:01.876096 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-04-06 03:07:01.876106 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 
2026-04-06 03:07:01.876116 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-06 03:07:01.876126 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-06 03:07:01.876160 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-06 03:07:01.876170 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-06 03:07:01.876195 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-04-06 03:07:01.876205 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-04-06 03:07:01.876215 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-04-06 03:07:01.876224 | orchestrator | 2026-04-06 03:07:01.876234 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-04-06 03:07:01.876244 | orchestrator | Monday 06 April 2026 03:06:54 +0000 (0:00:03.412) 0:00:28.175 ********** 2026-04-06 03:07:01.876254 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-04-06 03:07:01.876265 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-06 03:07:01.876275 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-06 03:07:01.876284 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-04-06 03:07:01.876294 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-06 03:07:01.876304 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-04-06 03:07:01.876314 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.glance.keyring) 2026-04-06 03:07:01.876330 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-04-06 03:07:01.876345 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-04-06 03:07:01.876361 | orchestrator | 2026-04-06 03:07:01.876377 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 03:07:01.876394 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 03:07:01.876411 | orchestrator | 2026-04-06 03:07:01.876428 | orchestrator | 2026-04-06 03:07:01.876453 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 03:07:01.876471 | orchestrator | Monday 06 April 2026 03:07:01 +0000 (0:00:07.428) 0:00:35.604 ********** 2026-04-06 03:07:01.876490 | orchestrator | =============================================================================== 2026-04-06 03:07:01.876506 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.88s 2026-04-06 03:07:01.876524 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.43s 2026-04-06 03:07:01.876542 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.39s 2026-04-06 03:07:01.876558 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.26s 2026-04-06 03:07:01.876574 | orchestrator | Check if target directories exist --------------------------------------- 3.41s 2026-04-06 03:07:01.876590 | orchestrator | Create share directory -------------------------------------------------- 1.04s 2026-04-06 03:07:14.576647 | orchestrator | 2026-04-06 03:07:14 | INFO  | Task 5553a214-23e1-4b88-a815-681698fbf82d (cephclient) was prepared for execution. 
2026-04-06 03:07:14.576781 | orchestrator | 2026-04-06 03:07:14 | INFO  | It takes a moment until task 5553a214-23e1-4b88-a815-681698fbf82d (cephclient) has been started and output is visible here.
2026-04-06 03:08:18.370183 | orchestrator |
2026-04-06 03:08:18.370322 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-04-06 03:08:18.370352 | orchestrator |
2026-04-06 03:08:18.370374 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-04-06 03:08:18.370386 | orchestrator | Monday 06 April 2026 03:07:19 +0000 (0:00:00.289) 0:00:00.289 **********
2026-04-06 03:08:18.370398 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-04-06 03:08:18.370411 | orchestrator |
2026-04-06 03:08:18.370422 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-04-06 03:08:18.370460 | orchestrator | Monday 06 April 2026 03:07:19 +0000 (0:00:00.270) 0:00:00.560 **********
2026-04-06 03:08:18.370473 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-04-06 03:08:18.370485 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-04-06 03:08:18.370496 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-04-06 03:08:18.370508 | orchestrator |
2026-04-06 03:08:18.370519 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-04-06 03:08:18.370530 | orchestrator | Monday 06 April 2026 03:07:21 +0000 (0:00:01.373) 0:00:01.934 **********
2026-04-06 03:08:18.370542 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-04-06 03:08:18.370553 | orchestrator |
2026-04-06 03:08:18.370564 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-04-06 03:08:18.370575 | orchestrator | Monday 06 April 2026 03:07:22 +0000 (0:00:01.582) 0:00:03.516 **********
2026-04-06 03:08:18.370586 | orchestrator | changed: [testbed-manager]
2026-04-06 03:08:18.370597 | orchestrator |
2026-04-06 03:08:18.370608 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-04-06 03:08:18.370621 | orchestrator | Monday 06 April 2026 03:07:23 +0000 (0:00:01.027) 0:00:04.544 **********
2026-04-06 03:08:18.370640 | orchestrator | changed: [testbed-manager]
2026-04-06 03:08:18.370657 | orchestrator |
2026-04-06 03:08:18.370675 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-04-06 03:08:18.370693 | orchestrator | Monday 06 April 2026 03:07:24 +0000 (0:00:00.961) 0:00:05.505 **********
2026-04-06 03:08:18.370711 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-04-06 03:08:18.370729 | orchestrator | ok: [testbed-manager]
2026-04-06 03:08:18.370748 | orchestrator |
2026-04-06 03:08:18.370769 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-04-06 03:08:18.370787 | orchestrator | Monday 06 April 2026 03:08:07 +0000 (0:00:42.847) 0:00:48.353 **********
2026-04-06 03:08:18.370807 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-04-06 03:08:18.370822 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-04-06 03:08:18.370835 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-04-06 03:08:18.370848 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-04-06 03:08:18.370927 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-04-06 03:08:18.370941 | orchestrator |
2026-04-06 03:08:18.370954 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-04-06 03:08:18.370967 | orchestrator | Monday 06 April 2026 03:08:11 +0000 (0:00:04.384) 0:00:52.738 **********
2026-04-06 03:08:18.370980 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-04-06 03:08:18.370994 | orchestrator |
2026-04-06 03:08:18.371008 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-04-06 03:08:18.371020 | orchestrator | Monday 06 April 2026 03:08:12 +0000 (0:00:00.511) 0:00:53.249 **********
2026-04-06 03:08:18.371031 | orchestrator | skipping: [testbed-manager]
2026-04-06 03:08:18.371042 | orchestrator |
2026-04-06 03:08:18.371053 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-04-06 03:08:18.371064 | orchestrator | Monday 06 April 2026 03:08:12 +0000 (0:00:00.147) 0:00:53.397 **********
2026-04-06 03:08:18.371076 | orchestrator | skipping: [testbed-manager]
2026-04-06 03:08:18.371087 | orchestrator |
2026-04-06 03:08:18.371098 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-04-06 03:08:18.371109 | orchestrator | Monday 06 April 2026 03:08:13 +0000 (0:00:00.544) 0:00:53.941 **********
2026-04-06 03:08:18.371120 | orchestrator | changed: [testbed-manager]
2026-04-06 03:08:18.371131 | orchestrator |
2026-04-06 03:08:18.371142 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-04-06 03:08:18.371169 | orchestrator | Monday 06 April 2026 03:08:14 +0000 (0:00:01.762) 0:00:55.703 **********
2026-04-06 03:08:18.371198 | orchestrator | changed: [testbed-manager]
2026-04-06 03:08:18.371209 | orchestrator |
2026-04-06 03:08:18.371220 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-04-06 03:08:18.371231 | orchestrator | Monday 06 April 2026 03:08:15 +0000 (0:00:00.776) 0:00:56.480 **********
2026-04-06 03:08:18.371242 | orchestrator | changed: [testbed-manager]
2026-04-06 03:08:18.371253 | orchestrator |
2026-04-06 03:08:18.371264 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-04-06 03:08:18.371275 | orchestrator | Monday 06 April 2026 03:08:16 +0000 (0:00:00.669) 0:00:57.150 **********
2026-04-06 03:08:18.371286 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-04-06 03:08:18.371297 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-04-06 03:08:18.371308 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-04-06 03:08:18.371319 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-04-06 03:08:18.371330 | orchestrator |
2026-04-06 03:08:18.371341 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 03:08:18.371354 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 03:08:18.371366 | orchestrator |
2026-04-06 03:08:18.371377 | orchestrator |
2026-04-06 03:08:18.371408 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 03:08:18.371420 | orchestrator | Monday 06 April 2026 03:08:17 +0000 (0:00:01.635) 0:00:58.785 **********
2026-04-06 03:08:18.371431 | orchestrator | ===============================================================================
2026-04-06 03:08:18.371442 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.85s
2026-04-06 03:08:18.371453 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.38s
2026-04-06 03:08:18.371464 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.76s
2026-04-06 03:08:18.371475 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.64s
2026-04-06 03:08:18.371486 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.58s
2026-04-06 03:08:18.371497 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.37s
2026-04-06 03:08:18.371508 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.03s
2026-04-06 03:08:18.371519 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.96s
2026-04-06 03:08:18.371530 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.78s
2026-04-06 03:08:18.371549 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.67s
2026-04-06 03:08:18.371577 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.54s
2026-04-06 03:08:18.371598 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.51s
2026-04-06 03:08:18.371615 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.27s
2026-04-06 03:08:18.371634 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s
2026-04-06 03:08:21.024939 | orchestrator | 2026-04-06 03:08:21 | INFO  | Task 27a35f63-bea1-4171-a2aa-f83d4c7c5f1b (ceph-bootstrap-dashboard) was prepared for execution.
2026-04-06 03:08:21.027770 | orchestrator | 2026-04-06 03:08:21 | INFO  | It takes a moment until task 27a35f63-bea1-4171-a2aa-f83d4c7c5f1b (ceph-bootstrap-dashboard) has been started and output is visible here.
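The "Copy wrapper scripts" task above installs small shell wrappers (ceph, ceph-authtool, rados, radosgw-admin, rbd) so that the Ceph CLI tools run inside the cephclient container rather than needing a host-side Ceph install. A minimal sketch of that idea; the container name `cephclient` and the wrapper body are assumptions for illustration, the real role templates its own wrappers:

```python
# Illustration only: the osism.services.cephclient role ships its own wrapper
# templates; the container name and exec command here are assumptions.
WRAPPER_TEMPLATE = """#!/usr/bin/env bash
# Run '{command}' inside the cephclient container, forwarding all arguments.
exec docker exec cephclient {command} "$@"
"""

def render_wrapper(command: str) -> str:
    """Render the wrapper script text for one Ceph CLI command."""
    return WRAPPER_TEMPLATE.format(command=command)

# One wrapper per CLI tool seen in the task's loop items above.
for cmd in ["ceph", "ceph-authtool", "rados", "radosgw-admin", "rbd"]:
    script = render_wrapper(cmd)
    assert script.startswith("#!/usr/bin/env bash")
    assert f"docker exec cephclient {cmd}" in script
```

Each rendered script would be written to a directory on the PATH (e.g. /usr/local/bin) and marked executable, which is what lets `ceph -s` on the manager transparently reach the containerized client.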
2026-04-06 03:09:41.423088 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-06 03:09:41.423263 | orchestrator | 2.16.14
2026-04-06 03:09:41.423304 | orchestrator |
2026-04-06 03:09:41.423325 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-04-06 03:09:41.423344 | orchestrator |
2026-04-06 03:09:41.423362 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-04-06 03:09:41.423381 | orchestrator | Monday 06 April 2026 03:08:25 +0000 (0:00:00.286) 0:00:00.286 **********
2026-04-06 03:09:41.423433 | orchestrator | changed: [testbed-manager]
2026-04-06 03:09:41.423453 | orchestrator |
2026-04-06 03:09:41.423469 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-04-06 03:09:41.423485 | orchestrator | Monday 06 April 2026 03:08:27 +0000 (0:00:02.070) 0:00:02.356 **********
2026-04-06 03:09:41.423502 | orchestrator | changed: [testbed-manager]
2026-04-06 03:09:41.423518 | orchestrator |
2026-04-06 03:09:41.423535 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-04-06 03:09:41.423553 | orchestrator | Monday 06 April 2026 03:08:29 +0000 (0:00:01.206) 0:00:03.563 **********
2026-04-06 03:09:41.423571 | orchestrator | changed: [testbed-manager]
2026-04-06 03:09:41.423589 | orchestrator |
2026-04-06 03:09:41.423607 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-04-06 03:09:41.423624 | orchestrator | Monday 06 April 2026 03:08:30 +0000 (0:00:01.109) 0:00:04.672 **********
2026-04-06 03:09:41.423640 | orchestrator | changed: [testbed-manager]
2026-04-06 03:09:41.423656 | orchestrator |
2026-04-06 03:09:41.423672 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-04-06 03:09:41.423688 | orchestrator | Monday 06 April 2026 03:08:31 +0000 (0:00:01.198) 0:00:05.871 **********
2026-04-06 03:09:41.423703 | orchestrator | changed: [testbed-manager]
2026-04-06 03:09:41.423719 | orchestrator |
2026-04-06 03:09:41.423736 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-04-06 03:09:41.423754 | orchestrator | Monday 06 April 2026 03:08:32 +0000 (0:00:01.102) 0:00:06.973 **********
2026-04-06 03:09:41.423770 | orchestrator | changed: [testbed-manager]
2026-04-06 03:09:41.423787 | orchestrator |
2026-04-06 03:09:41.423823 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-04-06 03:09:41.423842 | orchestrator | Monday 06 April 2026 03:08:33 +0000 (0:00:02.116) 0:00:08.138 **********
2026-04-06 03:09:41.423858 | orchestrator | changed: [testbed-manager]
2026-04-06 03:09:41.423874 | orchestrator |
2026-04-06 03:09:41.423889 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-04-06 03:09:41.423941 | orchestrator | Monday 06 April 2026 03:08:35 +0000 (0:00:01.284) 0:00:10.255 **********
2026-04-06 03:09:41.423959 | orchestrator | changed: [testbed-manager]
2026-04-06 03:09:41.423970 | orchestrator |
2026-04-06 03:09:41.423980 | orchestrator | TASK [Create admin user] *******************************************************
2026-04-06 03:09:41.423995 | orchestrator | Monday 06 April 2026 03:08:37 +0000 (0:00:01.284) 0:00:11.539 **********
2026-04-06 03:09:41.424011 | orchestrator | changed: [testbed-manager]
2026-04-06 03:09:41.424035 | orchestrator |
2026-04-06 03:09:41.424051 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-04-06 03:09:41.424067 | orchestrator | Monday 06 April 2026 03:09:16 +0000 (0:00:39.334) 0:00:50.874 **********
2026-04-06 03:09:41.424082 | orchestrator | skipping: [testbed-manager]
2026-04-06 03:09:41.424099 | orchestrator |
2026-04-06 03:09:41.424115 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-04-06 03:09:41.424133 | orchestrator |
2026-04-06 03:09:41.424148 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-04-06 03:09:41.424165 | orchestrator | Monday 06 April 2026 03:09:16 +0000 (0:00:00.177) 0:00:51.051 **********
2026-04-06 03:09:41.424177 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:09:41.424187 | orchestrator |
2026-04-06 03:09:41.424197 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-04-06 03:09:41.424208 | orchestrator |
2026-04-06 03:09:41.424217 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-04-06 03:09:41.424227 | orchestrator | Monday 06 April 2026 03:09:28 +0000 (0:00:11.810) 0:01:02.861 **********
2026-04-06 03:09:41.424237 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:09:41.424247 | orchestrator |
2026-04-06 03:09:41.424256 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-04-06 03:09:41.424266 | orchestrator |
2026-04-06 03:09:41.424276 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-04-06 03:09:41.424301 | orchestrator | Monday 06 April 2026 03:09:39 +0000 (0:00:11.243) 0:01:14.105 **********
2026-04-06 03:09:41.424311 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:09:41.424322 | orchestrator |
2026-04-06 03:09:41.424331 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 03:09:41.424343 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-06 03:09:41.424354 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 03:09:41.424365 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 03:09:41.424375 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 03:09:41.424384 | orchestrator |
2026-04-06 03:09:41.424394 | orchestrator |
2026-04-06 03:09:41.424404 | orchestrator |
2026-04-06 03:09:41.424414 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 03:09:41.424423 | orchestrator | Monday 06 April 2026 03:09:40 +0000 (0:00:01.367) 0:01:15.473 **********
2026-04-06 03:09:41.424433 | orchestrator | ===============================================================================
2026-04-06 03:09:41.424443 | orchestrator | Create admin user ------------------------------------------------------ 39.33s
2026-04-06 03:09:41.424476 | orchestrator | Restart ceph manager service ------------------------------------------- 24.42s
2026-04-06 03:09:41.424487 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.12s
2026-04-06 03:09:41.424497 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.07s
2026-04-06 03:09:41.424507 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.28s
2026-04-06 03:09:41.424516 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.21s
2026-04-06 03:09:41.424526 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.20s
2026-04-06 03:09:41.424535 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.17s
2026-04-06 03:09:41.424545 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.11s
2026-04-06 03:09:41.424555 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.10s
2026-04-06 03:09:41.424565 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.18s
2026-04-06 03:09:41.780922 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh
2026-04-06 03:09:44.160042 | orchestrator | 2026-04-06 03:09:44 | INFO  | Task 546d0989-f358-4be0-a344-34efc107f6e4 (keystone) was prepared for execution.
2026-04-06 03:09:44.160133 | orchestrator | 2026-04-06 03:09:44 | INFO  | It takes a moment until task 546d0989-f358-4be0-a344-34efc107f6e4 (keystone) has been started and output is visible here.
2026-04-06 03:09:52.027009 | orchestrator |
2026-04-06 03:09:52.027110 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 03:09:52.027121 | orchestrator |
2026-04-06 03:09:52.027126 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 03:09:52.027132 | orchestrator | Monday 06 April 2026 03:09:48 +0000 (0:00:00.319) 0:00:00.319 **********
2026-04-06 03:09:52.027137 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:09:52.027144 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:09:52.027163 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:09:52.027168 | orchestrator |
2026-04-06 03:09:52.027173 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 03:09:52.027178 | orchestrator | Monday 06 April 2026 03:09:49 +0000 (0:00:00.360) 0:00:00.679 **********
2026-04-06 03:09:52.027183 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-04-06 03:09:52.027204 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-04-06 03:09:52.027209 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-04-06 03:09:52.027214 | orchestrator |
2026-04-06 03:09:52.027218 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-04-06 03:09:52.027223 | orchestrator |
2026-04-06 03:09:52.027228 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-06 03:09:52.027233 | orchestrator | Monday 06 April 2026 03:09:49 +0000 (0:00:00.516) 0:00:01.196 **********
2026-04-06 03:09:52.027239 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 03:09:52.027247 | orchestrator |
2026-04-06 03:09:52.027254 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-04-06 03:09:52.027268 | orchestrator | Monday 06 April 2026 03:09:50 +0000 (0:00:00.648) 0:00:01.844 **********
2026-04-06 03:09:52.027282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-06 03:09:52.027294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-06 03:09:52.027323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-06 03:09:52.027348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-06 03:09:52.027359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-06 03:09:52.027368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-06 03:09:52.027376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-06 03:09:52.027384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-06 03:09:52.027393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-06 03:09:52.027402 | orchestrator |
2026-04-06 03:09:52.027410 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-04-06 03:09:52.027432 | orchestrator | Monday 06 April 2026 03:09:52 +0000 (0:00:01.549) 0:00:03.394 **********
2026-04-06 03:09:58.055457 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:09:58.055556 | orchestrator |
2026-04-06 03:09:58.055570 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-04-06 03:09:58.055582 | orchestrator | Monday 06 April 2026 03:09:52 +0000 (0:00:00.335) 0:00:03.730 **********
2026-04-06 03:09:58.055591 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:09:58.055616 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:09:58.055626 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:09:58.055635 | orchestrator |
2026-04-06 03:09:58.055645 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-04-06 03:09:58.055654 | orchestrator | Monday 06 April 2026 03:09:52 +0000 (0:00:00.352) 0:00:04.082 **********
2026-04-06 03:09:58.055663 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 03:09:58.055673 | orchestrator |
2026-04-06 03:09:58.055696 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-06 03:09:58.055714 | orchestrator | Monday 06 April 2026 03:09:53 +0000 (0:00:00.949) 0:00:05.032 **********
2026-04-06 03:09:58.055723 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 03:09:58.055733 | orchestrator |
2026-04-06 03:09:58.055740 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-04-06 03:09:58.055746 | orchestrator | Monday 06 April 2026 03:09:54 +0000 (0:00:00.633) 0:00:05.666 **********
2026-04-06 03:09:58.055757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-06 03:09:58.055767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-06 03:09:58.055774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-06 03:09:58.055819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-06 03:09:58.055828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-06 03:09:58.055835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-06 03:09:58.055841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-06 03:09:58.055847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-06 03:09:58.055852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-06 03:09:58.055863 | orchestrator |
2026-04-06 03:09:58.055870 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-04-06 03:09:58.055875 | orchestrator | Monday 06 April 2026 03:09:57 +0000 (0:00:03.122) 0:00:08.788 **********
2026-04-06 03:09:58.055887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'},
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-06 03:09:58.890257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 03:09:58.890361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 03:09:58.890369 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:09:58.890375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-06 03:09:58.890395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 03:09:58.890403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 03:09:58.890407 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:09:58.890422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-06 03:09:58.890426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-04-06 03:09:58.890431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 03:09:58.890434 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:09:58.890438 | orchestrator | 2026-04-06 03:09:58.890447 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-06 03:09:58.890453 | orchestrator | Monday 06 April 2026 03:09:58 +0000 (0:00:00.640) 0:00:09.428 ********** 2026-04-06 03:09:58.890457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-06 03:09:58.890464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 03:09:58.890473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 03:10:02.136069 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:10:02.136193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-06 03:10:02.136221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 03:10:02.136267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 03:10:02.136284 | 
orchestrator | skipping: [testbed-node-1] 2026-04-06 03:10:02.136321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-06 03:10:02.136338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 03:10:02.136368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 03:10:02.136378 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:10:02.136387 | orchestrator | 2026-04-06 03:10:02.136397 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-06 03:10:02.136408 | orchestrator | Monday 06 April 2026 03:09:58 +0000 (0:00:00.834) 0:00:10.263 ********** 2026-04-06 03:10:02.136418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-06 03:10:02.136437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-06 03:10:02.136454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-06 03:10:02.136474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-06 03:10:07.130740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-06 03:10:07.130849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-04-06 03:10:07.130859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-06 03:10:07.130867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-06 03:10:07.130888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-06 
03:10:07.130896 | orchestrator | 2026-04-06 03:10:07.130904 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-06 03:10:07.130912 | orchestrator | Monday 06 April 2026 03:10:02 +0000 (0:00:03.240) 0:00:13.504 ********** 2026-04-06 03:10:07.130994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-06 03:10:07.131005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-04-06 03:10:07.131019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-06 03:10:07.131027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 03:10:07.131039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-06 03:10:07.131053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 03:10:11.005268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-06 03:10:11.005390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-06 03:10:11.005402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-06 03:10:11.005410 | orchestrator | 2026-04-06 03:10:11.005418 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-06 03:10:11.005426 | orchestrator | Monday 06 April 2026 03:10:07 +0000 (0:00:04.994) 0:00:18.498 ********** 2026-04-06 03:10:11.005433 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:10:11.005441 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:10:11.005446 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:10:11.005453 | orchestrator | 
2026-04-06 03:10:11.005460 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-04-06 03:10:11.005467 | orchestrator | Monday 06 April 2026 03:10:08 +0000 (0:00:01.450) 0:00:19.949 ********** 2026-04-06 03:10:11.005474 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:10:11.005480 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:10:11.005487 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:10:11.005493 | orchestrator | 2026-04-06 03:10:11.005499 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-06 03:10:11.005505 | orchestrator | Monday 06 April 2026 03:10:09 +0000 (0:00:00.869) 0:00:20.819 ********** 2026-04-06 03:10:11.005512 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:10:11.005518 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:10:11.005525 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:10:11.005531 | orchestrator | 2026-04-06 03:10:11.005537 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-06 03:10:11.005560 | orchestrator | Monday 06 April 2026 03:10:10 +0000 (0:00:00.565) 0:00:21.384 ********** 2026-04-06 03:10:11.005568 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:10:11.005574 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:10:11.005581 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:10:11.005588 | orchestrator | 2026-04-06 03:10:11.005594 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-06 03:10:11.005602 | orchestrator | Monday 06 April 2026 03:10:10 +0000 (0:00:00.323) 0:00:21.707 ********** 2026-04-06 03:10:11.005628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-06 03:10:11.005645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 03:10:11.005653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 03:10:11.005660 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:10:11.005667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-06 03:10:11.005678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 03:10:11.005686 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 03:10:11.005698 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:10:11.005713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-06 03:10:30.805648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 03:10:30.805797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 03:10:30.805842 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:10:30.805863 | orchestrator | 2026-04-06 03:10:30.805881 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-06 03:10:30.805901 | orchestrator | Monday 06 April 2026 03:10:10 +0000 (0:00:00.669) 0:00:22.376 ********** 2026-04-06 03:10:30.805919 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:10:30.805961 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:10:30.805980 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:10:30.805999 | orchestrator | 2026-04-06 03:10:30.806074 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-06 03:10:30.806087 | orchestrator | Monday 06 April 2026 03:10:11 +0000 (0:00:00.317) 0:00:22.694 ********** 2026-04-06 03:10:30.806097 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-06 03:10:30.806109 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-06 03:10:30.806119 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-06 03:10:30.806152 | orchestrator | 2026-04-06 03:10:30.806165 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-06 03:10:30.806177 | orchestrator | Monday 06 April 2026 03:10:13 +0000 (0:00:01.836) 0:00:24.530 ********** 2026-04-06 03:10:30.806204 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-06 03:10:30.806215 | orchestrator | 2026-04-06 03:10:30.806236 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-04-06 03:10:30.806248 | orchestrator | Monday 06 April 2026 03:10:14 +0000 (0:00:00.977) 0:00:25.508 ********** 2026-04-06 03:10:30.806259 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:10:30.806271 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:10:30.806282 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:10:30.806293 | orchestrator | 2026-04-06 03:10:30.806308 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-04-06 03:10:30.806325 | orchestrator | Monday 06 April 2026 03:10:14 +0000 (0:00:00.621) 0:00:26.129 ********** 2026-04-06 03:10:30.806341 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-06 03:10:30.806356 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-06 03:10:30.806372 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-06 03:10:30.806388 | orchestrator | 2026-04-06 03:10:30.806405 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-04-06 03:10:30.806422 | orchestrator | Monday 06 April 2026 03:10:16 +0000 (0:00:01.326) 
0:00:27.456 ********** 2026-04-06 03:10:30.806439 | orchestrator | ok: [testbed-node-0] 2026-04-06 03:10:30.806457 | orchestrator | ok: [testbed-node-1] 2026-04-06 03:10:30.806474 | orchestrator | ok: [testbed-node-2] 2026-04-06 03:10:30.806491 | orchestrator | 2026-04-06 03:10:30.806507 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-04-06 03:10:30.806525 | orchestrator | Monday 06 April 2026 03:10:16 +0000 (0:00:00.592) 0:00:28.048 ********** 2026-04-06 03:10:30.806544 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-06 03:10:30.806561 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-06 03:10:30.806578 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-06 03:10:30.806588 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-06 03:10:30.806598 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-06 03:10:30.806608 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-06 03:10:30.806617 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-06 03:10:30.806627 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-06 03:10:30.806658 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-06 03:10:30.806671 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-06 03:10:30.806686 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-06 
03:10:30.806711 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-06 03:10:30.806729 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-06 03:10:30.806745 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-06 03:10:30.806760 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-06 03:10:30.806775 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-06 03:10:30.806790 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-06 03:10:30.806819 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-06 03:10:30.806835 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-06 03:10:30.806851 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-06 03:10:30.806867 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-06 03:10:30.806884 | orchestrator | 2026-04-06 03:10:30.806900 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-04-06 03:10:30.806916 | orchestrator | Monday 06 April 2026 03:10:25 +0000 (0:00:09.051) 0:00:37.100 ********** 2026-04-06 03:10:30.807004 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-06 03:10:30.807017 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-06 03:10:30.807027 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-06 03:10:30.807036 
| orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-06 03:10:30.807046 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-06 03:10:30.807055 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-06 03:10:30.807065 | orchestrator | 2026-04-06 03:10:30.807074 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-04-06 03:10:30.807083 | orchestrator | Monday 06 April 2026 03:10:28 +0000 (0:00:02.670) 0:00:39.770 ********** 2026-04-06 03:10:30.807118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-06 03:10:30.807145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-06 03:12:08.046373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-06 03:12:08.046546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-06 03:12:08.046574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-06 03:12:08.046579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-06 03:12:08.046583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-06 03:12:08.046605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-06 03:12:08.046616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-06 03:12:08.046620 | orchestrator | 2026-04-06 03:12:08.046625 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
2026-04-06 03:12:08.046630 | orchestrator | Monday 06 April 2026 03:10:30 +0000 (0:00:02.398) 0:00:42.169 ********** 2026-04-06 03:12:08.046634 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:12:08.046639 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:12:08.046643 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:12:08.046647 | orchestrator | 2026-04-06 03:12:08.046651 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-04-06 03:12:08.046655 | orchestrator | Monday 06 April 2026 03:10:31 +0000 (0:00:00.560) 0:00:42.730 ********** 2026-04-06 03:12:08.046659 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:12:08.046663 | orchestrator | 2026-04-06 03:12:08.046669 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-04-06 03:12:08.046676 | orchestrator | Monday 06 April 2026 03:10:33 +0000 (0:00:02.083) 0:00:44.814 ********** 2026-04-06 03:12:08.046685 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:12:08.046693 | orchestrator | 2026-04-06 03:12:08.046699 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-04-06 03:12:08.046706 | orchestrator | Monday 06 April 2026 03:10:35 +0000 (0:00:01.942) 0:00:46.757 ********** 2026-04-06 03:12:08.046712 | orchestrator | ok: [testbed-node-1] 2026-04-06 03:12:08.046718 | orchestrator | ok: [testbed-node-0] 2026-04-06 03:12:08.046724 | orchestrator | ok: [testbed-node-2] 2026-04-06 03:12:08.046730 | orchestrator | 2026-04-06 03:12:08.046737 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-04-06 03:12:08.046743 | orchestrator | Monday 06 April 2026 03:10:36 +0000 (0:00:00.848) 0:00:47.605 ********** 2026-04-06 03:12:08.046749 | orchestrator | ok: [testbed-node-0] 2026-04-06 03:12:08.046756 | orchestrator | ok: [testbed-node-1] 2026-04-06 03:12:08.046760 | orchestrator | ok: 
[testbed-node-2] 2026-04-06 03:12:08.046764 | orchestrator | 2026-04-06 03:12:08.046768 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-04-06 03:12:08.046773 | orchestrator | Monday 06 April 2026 03:10:36 +0000 (0:00:00.367) 0:00:47.972 ********** 2026-04-06 03:12:08.046777 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:12:08.046785 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:12:08.046789 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:12:08.046795 | orchestrator | 2026-04-06 03:12:08.046801 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-04-06 03:12:08.046807 | orchestrator | Monday 06 April 2026 03:10:37 +0000 (0:00:00.593) 0:00:48.566 ********** 2026-04-06 03:12:08.046813 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:12:08.046819 | orchestrator | 2026-04-06 03:12:08.046824 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-04-06 03:12:08.046830 | orchestrator | Monday 06 April 2026 03:10:51 +0000 (0:00:14.765) 0:01:03.332 ********** 2026-04-06 03:12:08.046836 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:12:08.046842 | orchestrator | 2026-04-06 03:12:08.046848 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-06 03:12:08.046855 | orchestrator | Monday 06 April 2026 03:11:02 +0000 (0:00:10.487) 0:01:13.820 ********** 2026-04-06 03:12:08.046861 | orchestrator | 2026-04-06 03:12:08.046867 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-06 03:12:08.046881 | orchestrator | Monday 06 April 2026 03:11:02 +0000 (0:00:00.075) 0:01:13.895 ********** 2026-04-06 03:12:08.046888 | orchestrator | 2026-04-06 03:12:08.046894 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-06 
03:12:08.046901 | orchestrator | Monday 06 April 2026 03:11:02 +0000 (0:00:00.079) 0:01:13.975 ********** 2026-04-06 03:12:08.046907 | orchestrator | 2026-04-06 03:12:08.046912 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-04-06 03:12:08.046919 | orchestrator | Monday 06 April 2026 03:11:02 +0000 (0:00:00.082) 0:01:14.058 ********** 2026-04-06 03:12:08.046924 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:12:08.046930 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:12:08.046937 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:12:08.046943 | orchestrator | 2026-04-06 03:12:08.046950 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-04-06 03:12:08.046956 | orchestrator | Monday 06 April 2026 03:11:51 +0000 (0:00:49.226) 0:02:03.285 ********** 2026-04-06 03:12:08.046963 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:12:08.046970 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:12:08.046976 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:12:08.046983 | orchestrator | 2026-04-06 03:12:08.047032 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-04-06 03:12:08.047037 | orchestrator | Monday 06 April 2026 03:12:00 +0000 (0:00:08.220) 0:02:11.505 ********** 2026-04-06 03:12:08.047042 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:12:08.047047 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:12:08.047051 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:12:08.047056 | orchestrator | 2026-04-06 03:12:08.047061 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-06 03:12:08.047066 | orchestrator | Monday 06 April 2026 03:12:07 +0000 (0:00:07.277) 0:02:18.783 ********** 2026-04-06 03:12:08.047077 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 03:12:59.133525 | orchestrator | 2026-04-06 03:12:59.133638 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-04-06 03:12:59.133652 | orchestrator | Monday 06 April 2026 03:12:08 +0000 (0:00:00.632) 0:02:19.416 ********** 2026-04-06 03:12:59.133661 | orchestrator | ok: [testbed-node-0] 2026-04-06 03:12:59.133667 | orchestrator | ok: [testbed-node-1] 2026-04-06 03:12:59.133672 | orchestrator | ok: [testbed-node-2] 2026-04-06 03:12:59.133677 | orchestrator | 2026-04-06 03:12:59.133681 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-04-06 03:12:59.133686 | orchestrator | Monday 06 April 2026 03:12:09 +0000 (0:00:01.293) 0:02:20.710 ********** 2026-04-06 03:12:59.133691 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:12:59.133696 | orchestrator | 2026-04-06 03:12:59.133700 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-04-06 03:12:59.133705 | orchestrator | Monday 06 April 2026 03:12:11 +0000 (0:00:01.805) 0:02:22.516 ********** 2026-04-06 03:12:59.133709 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-04-06 03:12:59.133714 | orchestrator | 2026-04-06 03:12:59.133718 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-04-06 03:12:59.133722 | orchestrator | Monday 06 April 2026 03:12:22 +0000 (0:00:11.746) 0:02:34.262 ********** 2026-04-06 03:12:59.133726 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-04-06 03:12:59.133731 | orchestrator | 2026-04-06 03:12:59.133735 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-04-06 03:12:59.133739 | orchestrator | Monday 06 April 2026 03:12:47 +0000 (0:00:24.441) 0:02:58.704 ********** 2026-04-06 03:12:59.133743 | orchestrator | ok: [testbed-node-0] => 
(item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-04-06 03:12:59.133750 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-04-06 03:12:59.133778 | orchestrator | 2026-04-06 03:12:59.133785 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-04-06 03:12:59.133792 | orchestrator | Monday 06 April 2026 03:12:53 +0000 (0:00:06.520) 0:03:05.224 ********** 2026-04-06 03:12:59.133798 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:12:59.133805 | orchestrator | 2026-04-06 03:12:59.133811 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-04-06 03:12:59.133817 | orchestrator | Monday 06 April 2026 03:12:53 +0000 (0:00:00.145) 0:03:05.370 ********** 2026-04-06 03:12:59.133822 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:12:59.133828 | orchestrator | 2026-04-06 03:12:59.133835 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-04-06 03:12:59.133842 | orchestrator | Monday 06 April 2026 03:12:54 +0000 (0:00:00.143) 0:03:05.513 ********** 2026-04-06 03:12:59.133848 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:12:59.133855 | orchestrator | 2026-04-06 03:12:59.133862 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-04-06 03:12:59.133882 | orchestrator | Monday 06 April 2026 03:12:54 +0000 (0:00:00.152) 0:03:05.665 ********** 2026-04-06 03:12:59.133889 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:12:59.133896 | orchestrator | 2026-04-06 03:12:59.133902 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-04-06 03:12:59.133908 | orchestrator | Monday 06 April 2026 03:12:54 +0000 (0:00:00.607) 0:03:06.273 ********** 2026-04-06 03:12:59.133915 | orchestrator | ok: [testbed-node-0] 2026-04-06 
03:12:59.133922 | orchestrator |
2026-04-06 03:12:59.133926 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-06 03:12:59.133930 | orchestrator | Monday 06 April 2026 03:12:58 +0000 (0:00:03.268) 0:03:09.542 **********
2026-04-06 03:12:59.133935 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:12:59.133939 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:12:59.133943 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:12:59.133947 | orchestrator |
2026-04-06 03:12:59.133951 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 03:12:59.133957 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-06 03:12:59.133962 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-06 03:12:59.133967 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-06 03:12:59.133971 | orchestrator |
2026-04-06 03:12:59.133975 | orchestrator |
2026-04-06 03:12:59.133979 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 03:12:59.133983 | orchestrator | Monday 06 April 2026 03:12:58 +0000 (0:00:00.491) 0:03:10.033 **********
2026-04-06 03:12:59.133988 | orchestrator | ===============================================================================
2026-04-06 03:12:59.133992 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 49.23s
2026-04-06 03:12:59.133996 | orchestrator | service-ks-register : keystone | Creating services --------------------- 24.44s
2026-04-06 03:12:59.134000 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.77s
2026-04-06 03:12:59.134004 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.75s
2026-04-06 03:12:59.134008 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.49s
2026-04-06 03:12:59.134139 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.05s
2026-04-06 03:12:59.134149 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 8.22s
2026-04-06 03:12:59.134156 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.28s
2026-04-06 03:12:59.134163 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.52s
2026-04-06 03:12:59.134199 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.99s
2026-04-06 03:12:59.134207 | orchestrator | keystone : Creating default user role ----------------------------------- 3.27s
2026-04-06 03:12:59.134214 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.24s
2026-04-06 03:12:59.134220 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.12s
2026-04-06 03:12:59.134226 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.67s
2026-04-06 03:12:59.134233 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.40s
2026-04-06 03:12:59.134240 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.08s
2026-04-06 03:12:59.134246 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 1.94s
2026-04-06 03:12:59.134254 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.84s
2026-04-06 03:12:59.134261 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.81s
2026-04-06 03:12:59.134268 | orchestrator | keystone : Ensuring config directories exist ----------------------------
1.55s 2026-04-06 03:13:02.134449 | orchestrator | 2026-04-06 03:13:02 | INFO  | Task d14d823e-ad96-436e-aae9-fa41b84ce1c6 (placement) was prepared for execution. 2026-04-06 03:13:02.134580 | orchestrator | 2026-04-06 03:13:02 | INFO  | It takes a moment until task d14d823e-ad96-436e-aae9-fa41b84ce1c6 (placement) has been started and output is visible here. 2026-04-06 03:13:38.704633 | orchestrator | 2026-04-06 03:13:38.704797 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 03:13:38.704836 | orchestrator | 2026-04-06 03:13:38.704849 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 03:13:38.704861 | orchestrator | Monday 06 April 2026 03:13:06 +0000 (0:00:00.331) 0:00:00.331 ********** 2026-04-06 03:13:38.704872 | orchestrator | ok: [testbed-node-0] 2026-04-06 03:13:38.704885 | orchestrator | ok: [testbed-node-1] 2026-04-06 03:13:38.704896 | orchestrator | ok: [testbed-node-2] 2026-04-06 03:13:38.704909 | orchestrator | 2026-04-06 03:13:38.704920 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 03:13:38.704932 | orchestrator | Monday 06 April 2026 03:13:07 +0000 (0:00:00.350) 0:00:00.681 ********** 2026-04-06 03:13:38.704943 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-04-06 03:13:38.704955 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-04-06 03:13:38.704966 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-04-06 03:13:38.704977 | orchestrator | 2026-04-06 03:13:38.704988 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-04-06 03:13:38.704999 | orchestrator | 2026-04-06 03:13:38.705028 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-06 03:13:38.705092 | orchestrator | Monday 06 April 2026 03:13:07 
+0000 (0:00:00.501) 0:00:01.182 ********** 2026-04-06 03:13:38.705104 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 03:13:38.705117 | orchestrator | 2026-04-06 03:13:38.705128 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-04-06 03:13:38.705140 | orchestrator | Monday 06 April 2026 03:13:08 +0000 (0:00:00.602) 0:00:01.784 ********** 2026-04-06 03:13:38.705151 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-04-06 03:13:38.705164 | orchestrator | 2026-04-06 03:13:38.705179 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-04-06 03:13:38.705199 | orchestrator | Monday 06 April 2026 03:13:12 +0000 (0:00:03.983) 0:00:05.768 ********** 2026-04-06 03:13:38.705217 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-04-06 03:13:38.705236 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-04-06 03:13:38.705292 | orchestrator | 2026-04-06 03:13:38.705313 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-04-06 03:13:38.705332 | orchestrator | Monday 06 April 2026 03:13:19 +0000 (0:00:06.822) 0:00:12.590 ********** 2026-04-06 03:13:38.705352 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-04-06 03:13:38.705371 | orchestrator | 2026-04-06 03:13:38.705389 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-04-06 03:13:38.705408 | orchestrator | Monday 06 April 2026 03:13:22 +0000 (0:00:03.699) 0:00:16.290 ********** 2026-04-06 03:13:38.705426 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-06 03:13:38.705443 | orchestrator | changed: [testbed-node-0] => (item=placement -> 
service) 2026-04-06 03:13:38.705462 | orchestrator | 2026-04-06 03:13:38.705481 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-04-06 03:13:38.705500 | orchestrator | Monday 06 April 2026 03:13:27 +0000 (0:00:04.223) 0:00:20.513 ********** 2026-04-06 03:13:38.705520 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-06 03:13:38.705539 | orchestrator | 2026-04-06 03:13:38.705558 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-04-06 03:13:38.705576 | orchestrator | Monday 06 April 2026 03:13:30 +0000 (0:00:03.169) 0:00:23.683 ********** 2026-04-06 03:13:38.705596 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-04-06 03:13:38.705615 | orchestrator | 2026-04-06 03:13:38.705633 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-06 03:13:38.705647 | orchestrator | Monday 06 April 2026 03:13:34 +0000 (0:00:04.145) 0:00:27.828 ********** 2026-04-06 03:13:38.705658 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:13:38.705671 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:13:38.705690 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:13:38.705708 | orchestrator | 2026-04-06 03:13:38.705727 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-04-06 03:13:38.705745 | orchestrator | Monday 06 April 2026 03:13:34 +0000 (0:00:00.322) 0:00:28.151 ********** 2026-04-06 03:13:38.705769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-06 03:13:38.705815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-06 03:13:38.705837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-06 03:13:38.705861 | orchestrator | 2026-04-06 03:13:38.705872 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-04-06 03:13:38.705884 | orchestrator | Monday 06 April 2026 03:13:35 +0000 (0:00:00.851) 0:00:29.002 ********** 2026-04-06 03:13:38.705895 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:13:38.705906 | orchestrator | 2026-04-06 03:13:38.705917 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-04-06 03:13:38.705928 | orchestrator | Monday 06 April 2026 03:13:36 +0000 (0:00:00.362) 0:00:29.365 ********** 2026-04-06 03:13:38.705939 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:13:38.705950 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:13:38.705961 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:13:38.705972 | orchestrator | 2026-04-06 03:13:38.705983 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-06 03:13:38.705994 | orchestrator | Monday 06 April 2026 03:13:36 +0000 (0:00:00.328) 0:00:29.694 ********** 2026-04-06 03:13:38.706006 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 03:13:38.706097 | orchestrator | 2026-04-06 03:13:38.706111 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-04-06 03:13:38.706123 | orchestrator | Monday 06 April 2026 03:13:36 +0000 (0:00:00.617) 0:00:30.312 ********** 2026-04-06 
03:13:38.706135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-06 03:13:38.706194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-06 03:13:41.768136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-06 03:13:41.768236 | orchestrator | 2026-04-06 03:13:41.768249 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-04-06 03:13:41.768260 | orchestrator | Monday 06 April 2026 03:13:38 +0000 (0:00:01.742) 0:00:32.054 ********** 2026-04-06 03:13:41.768270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-06 03:13:41.768278 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:13:41.768287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-06 03:13:41.768297 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:13:41.768306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-06 03:13:41.768334 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:13:41.768340 | orchestrator | 2026-04-06 03:13:41.768346 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-04-06 03:13:41.768365 | orchestrator | Monday 06 April 2026 03:13:39 +0000 (0:00:00.648) 0:00:32.703 ********** 2026-04-06 03:13:41.768378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-06 03:13:41.768384 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:13:41.768390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-06 03:13:41.768396 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:13:41.768401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-06 03:13:41.768407 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:13:41.768412 | orchestrator | 2026-04-06 03:13:41.768417 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-04-06 03:13:41.768422 | orchestrator | Monday 06 April 2026 03:13:40 +0000 (0:00:00.760) 0:00:33.464 ********** 2026-04-06 03:13:41.768428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-06 03:13:41.768446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-06 03:13:49.210524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-06 03:13:49.210655 | orchestrator | 2026-04-06 03:13:49.210676 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-04-06 03:13:49.210691 | orchestrator | Monday 06 April 2026 03:13:41 +0000 (0:00:01.653) 0:00:35.118 ********** 2026-04-06 03:13:49.210711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-06 03:13:49.210726 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-06 03:13:49.210784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-06 03:13:49.210804 | orchestrator | 2026-04-06 03:13:49.210818 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] 
***************
2026-04-06 03:13:49.210830 | orchestrator | Monday 06 April 2026 03:13:44 +0000 (0:00:02.555) 0:00:37.674 **********
2026-04-06 03:13:49.210862 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-04-06 03:13:49.210877 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-04-06 03:13:49.210889 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-04-06 03:13:49.210898 | orchestrator |
2026-04-06 03:13:49.210906 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2026-04-06 03:13:49.210913 | orchestrator | Monday 06 April 2026 03:13:45 +0000 (0:00:01.527) 0:00:39.201 **********
2026-04-06 03:13:49.210921 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:13:49.210929 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:13:49.210936 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:13:49.210944 | orchestrator |
2026-04-06 03:13:49.210951 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2026-04-06 03:13:49.210958 | orchestrator | Monday 06 April 2026 03:13:47 +0000 (0:00:01.366) 0:00:40.568 **********
2026-04-06 03:13:49.210966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-06 03:13:49.210974 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:13:49.210982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-06 03:13:49.211000 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:13:49.211008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-06 03:13:49.211015 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:13:49.211022 | orchestrator |
2026-04-06 03:13:49.211031 | orchestrator | TASK [placement : Check placement containers] **********************************
2026-04-06 03:13:49.211096 | orchestrator | Monday 06 April 2026 03:13:48 +0000 (0:00:00.863) 0:00:41.431 **********
2026-04-06 03:13:49.211121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-06 03:14:18.785007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-06 03:14:18.785273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-06 03:14:18.785322 | orchestrator |
2026-04-06 03:14:18.785337 | orchestrator | TASK [placement : Creating placement databases] ********************************
2026-04-06 03:14:18.785351 | orchestrator | Monday 06 April 2026 03:13:49 +0000 (0:00:01.135) 0:00:42.567 **********
2026-04-06 03:14:18.785362 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:14:18.785374 | orchestrator |
2026-04-06 03:14:18.785385 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2026-04-06 03:14:18.785396 | orchestrator | Monday 06 April 2026 03:13:51 +0000 (0:00:02.065) 0:00:44.633 **********
2026-04-06 03:14:18.785407 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:14:18.785418 | orchestrator |
2026-04-06 03:14:18.785429 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2026-04-06 03:14:18.785440 | orchestrator | Monday 06 April 2026 03:13:53 +0000 (0:00:02.153) 0:00:46.787 **********
2026-04-06 03:14:18.785451 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:14:18.785469 | orchestrator |
2026-04-06 03:14:18.785492 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-06 03:14:18.785520 | orchestrator | Monday 06 April 2026 03:14:07 +0000 (0:00:14.183) 0:01:00.970 **********
2026-04-06 03:14:18.785538 | orchestrator |
2026-04-06 03:14:18.785556 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-06 03:14:18.785574 | orchestrator | Monday 06 April 2026 03:14:07 +0000 (0:00:00.074) 0:01:01.045 **********
2026-04-06 03:14:18.785592 | orchestrator |
2026-04-06 03:14:18.785609 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-06 03:14:18.785626 | orchestrator | Monday 06 April 2026 03:14:07 +0000 (0:00:00.075) 0:01:01.120 **********
2026-04-06 03:14:18.785643 | orchestrator |
2026-04-06 03:14:18.785660 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-04-06 03:14:18.785678 | orchestrator | Monday 06 April 2026 03:14:07 +0000 (0:00:00.073) 0:01:01.194 **********
2026-04-06 03:14:18.785695 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:14:18.785711 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:14:18.785729 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:14:18.785747 | orchestrator |
2026-04-06 03:14:18.785787 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 03:14:18.785812 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 03:14:18.785838 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-06 03:14:18.785854 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-06 03:14:18.785870 | orchestrator |
2026-04-06 03:14:18.785885 | orchestrator |
2026-04-06 03:14:18.785901 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 03:14:18.785916 | orchestrator | Monday 06 April 2026 03:14:18 +0000 (0:00:10.511) 0:01:11.706 **********
2026-04-06 03:14:18.785932 | orchestrator | ===============================================================================
2026-04-06 03:14:18.785948 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.18s
2026-04-06 03:14:18.786005 | orchestrator | placement : Restart placement-api container ---------------------------- 10.51s
2026-04-06 03:14:18.786373 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.82s
2026-04-06 03:14:18.786395 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.22s
2026-04-06 03:14:18.786414 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.15s
2026-04-06 03:14:18.786431 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.98s
2026-04-06 03:14:18.786450 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.70s
2026-04-06 03:14:18.786468 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.17s
2026-04-06 03:14:18.786487 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.56s
2026-04-06 03:14:18.786505 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.15s
2026-04-06 03:14:18.786524 | orchestrator | placement : Creating placement databases -------------------------------- 2.07s
2026-04-06 03:14:18.786541 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.74s
2026-04-06 03:14:18.786561 | orchestrator | placement : Copying over config.json files for services ----------------- 1.65s
2026-04-06 03:14:18.786580 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.53s
2026-04-06 03:14:18.786599 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.37s
2026-04-06 03:14:18.786618 | orchestrator | placement : Check placement containers ---------------------------------- 1.14s
2026-04-06 03:14:18.786637 | orchestrator | placement : Copying over existing policy file --------------------------- 0.86s
2026-04-06 03:14:18.786656 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.85s
2026-04-06 03:14:18.786675 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.76s
2026-04-06 03:14:18.786694 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.65s
2026-04-06 03:14:21.518387 | orchestrator | 2026-04-06 03:14:21 | INFO  | Task 9d367ada-adb8-498c-ab59-c7f03b1d7fcd (neutron) was prepared for execution.
2026-04-06 03:14:21.518519 | orchestrator | 2026-04-06 03:14:21 | INFO  | It takes a moment until task 9d367ada-adb8-498c-ab59-c7f03b1d7fcd (neutron) has been started and output is visible here.
2026-04-06 03:15:12.188763 | orchestrator |
2026-04-06 03:15:12.188884 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 03:15:12.188902 | orchestrator |
2026-04-06 03:15:12.188915 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 03:15:12.188927 | orchestrator | Monday 06 April 2026 03:14:26 +0000 (0:00:00.301) 0:00:00.301 **********
2026-04-06 03:15:12.188938 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:15:12.188991 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:15:12.189003 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:15:12.189014 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:15:12.189025 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:15:12.189036 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:15:12.189047 | orchestrator |
2026-04-06 03:15:12.189058 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 03:15:12.189069 | orchestrator | Monday 06 April 2026 03:14:27 +0000 (0:00:00.762) 0:00:01.063 **********
2026-04-06 03:15:12.189081 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-04-06 03:15:12.189092 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-04-06 03:15:12.189103 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-04-06 03:15:12.189114 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-04-06 03:15:12.189125 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-04-06 03:15:12.189136 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-04-06 03:15:12.189174 | orchestrator |
2026-04-06 03:15:12.189186 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-04-06 03:15:12.189197 | orchestrator |
2026-04-06 03:15:12.189208 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-06 03:15:12.189219 | orchestrator | Monday 06 April 2026 03:14:27 +0000 (0:00:00.683) 0:00:01.746 **********
2026-04-06 03:15:12.189231 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 03:15:12.189244 | orchestrator |
2026-04-06 03:15:12.189270 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-04-06 03:15:12.189282 | orchestrator | Monday 06 April 2026 03:14:29 +0000 (0:00:01.328) 0:00:03.074 **********
2026-04-06 03:15:12.189295 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:15:12.189308 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:15:12.189320 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:15:12.189332 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:15:12.189344 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:15:12.189357 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:15:12.189370 | orchestrator |
2026-04-06 03:15:12.189383 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-04-06 03:15:12.189396 | orchestrator | Monday 06 April 2026 03:14:30 +0000 (0:00:01.422) 0:00:04.497 **********
2026-04-06 03:15:12.189409 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:15:12.189422 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:15:12.189434 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:15:12.189446 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:15:12.189459 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:15:12.189471 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:15:12.189483 | orchestrator |
2026-04-06 03:15:12.189497 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-04-06 03:15:12.189508 | orchestrator | Monday 06 April 2026 03:14:31 +0000 (0:00:01.162) 0:00:05.659 **********
2026-04-06 03:15:12.189519 | orchestrator | ok: [testbed-node-0] => {
2026-04-06 03:15:12.189531 | orchestrator |  "changed": false,
2026-04-06 03:15:12.189542 | orchestrator |  "msg": "All assertions passed"
2026-04-06 03:15:12.189553 | orchestrator | }
2026-04-06 03:15:12.189564 | orchestrator | ok: [testbed-node-1] => {
2026-04-06 03:15:12.189575 | orchestrator |  "changed": false,
2026-04-06 03:15:12.189586 | orchestrator |  "msg": "All assertions passed"
2026-04-06 03:15:12.189597 | orchestrator | }
2026-04-06 03:15:12.189607 | orchestrator | ok: [testbed-node-2] => {
2026-04-06 03:15:12.189618 | orchestrator |  "changed": false,
2026-04-06 03:15:12.189629 | orchestrator |  "msg": "All assertions passed"
2026-04-06 03:15:12.189640 | orchestrator | }
2026-04-06 03:15:12.189651 | orchestrator | ok: [testbed-node-3] => {
2026-04-06 03:15:12.189662 | orchestrator |  "changed": false,
2026-04-06 03:15:12.189673 | orchestrator |  "msg": "All assertions passed"
2026-04-06 03:15:12.189684 | orchestrator | }
2026-04-06 03:15:12.189694 | orchestrator | ok: [testbed-node-4] => {
2026-04-06 03:15:12.189705 | orchestrator |  "changed": false,
2026-04-06 03:15:12.189716 | orchestrator |  "msg": "All assertions passed"
2026-04-06 03:15:12.189728 | orchestrator | }
2026-04-06 03:15:12.189739 | orchestrator | ok: [testbed-node-5] => {
2026-04-06 03:15:12.189750 | orchestrator |  "changed": false,
2026-04-06 03:15:12.189761 | orchestrator |  "msg": "All assertions passed"
2026-04-06 03:15:12.189773 | orchestrator | }
2026-04-06 03:15:12.189783 | orchestrator |
2026-04-06 03:15:12.189795 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-04-06 03:15:12.189805 | orchestrator | Monday 06 April 2026 03:14:32 +0000 (0:00:00.905) 0:00:06.565 **********
2026-04-06 03:15:12.189816 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:15:12.189827 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:15:12.189838 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:15:12.189849 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:15:12.189860 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:15:12.189880 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:15:12.189891 | orchestrator |
2026-04-06 03:15:12.189902 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-04-06 03:15:12.189913 | orchestrator | Monday 06 April 2026 03:14:33 +0000 (0:00:00.678) 0:00:07.243 **********
2026-04-06 03:15:12.189924 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-04-06 03:15:12.189935 | orchestrator |
2026-04-06 03:15:12.190080 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-04-06 03:15:12.190098 | orchestrator | Monday 06 April 2026 03:14:37 +0000 (0:00:03.945) 0:00:11.189 **********
2026-04-06 03:15:12.190110 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-04-06 03:15:12.190122 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-04-06 03:15:12.190133 | orchestrator |
2026-04-06 03:15:12.190164 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-04-06 03:15:12.190176 | orchestrator | Monday 06 April 2026 03:14:43 +0000 (0:00:06.479) 0:00:17.668 **********
2026-04-06 03:15:12.190187 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-06 03:15:12.190198 | orchestrator |
2026-04-06 03:15:12.190209 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-04-06 03:15:12.190220 | orchestrator | Monday 06 April 2026 03:14:46 +0000 (0:00:02.809) 0:00:20.478 **********
2026-04-06 03:15:12.190231 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-06 03:15:12.190241 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-04-06 03:15:12.190253 | orchestrator |
2026-04-06 03:15:12.190263 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-04-06 03:15:12.190274 | orchestrator | Monday 06 April 2026 03:14:50 +0000 (0:00:03.920) 0:00:24.398 **********
2026-04-06 03:15:12.190285 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-06 03:15:12.190296 | orchestrator |
2026-04-06 03:15:12.190307 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-04-06 03:15:12.190318 | orchestrator | Monday 06 April 2026 03:14:53 +0000 (0:00:03.288) 0:00:27.687 **********
2026-04-06 03:15:12.190329 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-04-06 03:15:12.190339 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-04-06 03:15:12.190350 | orchestrator |
2026-04-06 03:15:12.190361 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-06 03:15:12.190372 | orchestrator | Monday 06 April 2026 03:15:02 +0000 (0:00:08.473) 0:00:36.160 **********
2026-04-06 03:15:12.190383 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:15:12.190394 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:15:12.190405 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:15:12.190416 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:15:12.190427 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:15:12.190438 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:15:12.190449 | orchestrator |
2026-04-06 03:15:12.190467 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-04-06 03:15:12.190479 | orchestrator | Monday 06 April 2026 03:15:03 +0000 (0:00:00.867) 0:00:37.028 **********
2026-04-06 03:15:12.190490 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:15:12.190501 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:15:12.190512 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:15:12.190522 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:15:12.190533 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:15:12.190544 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:15:12.190555 | orchestrator |
2026-04-06 03:15:12.190566 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-04-06 03:15:12.190577 | orchestrator | Monday 06 April 2026 03:15:05 +0000 (0:00:02.423) 0:00:39.451 **********
2026-04-06 03:15:12.190587 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:15:12.190608 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:15:12.190619 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:15:12.190630 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:15:12.190641 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:15:12.190652 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:15:12.190662 | orchestrator |
2026-04-06 03:15:12.190673 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-06 03:15:12.190684 | orchestrator | Monday 06 April 2026 03:15:06 +0000 (0:00:01.306) 0:00:40.758 **********
2026-04-06 03:15:12.190695 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:15:12.190706 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:15:12.190717 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:15:12.190728 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:15:12.190739 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:15:12.190750 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:15:12.190761 | orchestrator |
2026-04-06 03:15:12.190772 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-04-06 03:15:12.190782 | orchestrator | Monday 06 April 2026 03:15:09 +0000 (0:00:02.589) 0:00:43.347 **********
2026-04-06 03:15:12.190798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-06 03:15:12.190822 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-06 03:15:18.246269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-06 03:15:18.246421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-06 03:15:18.246467 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-06 03:15:18.246483 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-06 03:15:18.246495 | orchestrator |
2026-04-06 03:15:18.246509 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2026-04-06 03:15:18.246522 | orchestrator | Monday 06 April 2026 03:15:12 +0000 (0:00:02.829) 0:00:46.177 **********
2026-04-06 03:15:18.246534 | orchestrator | [WARNING]: Skipped
2026-04-06 03:15:18.246546 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2026-04-06 03:15:18.246558 | orchestrator | due to this access issue:
2026-04-06 03:15:18.246570 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2026-04-06 03:15:18.246582 | orchestrator | a directory
2026-04-06 03:15:18.246593 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 03:15:18.246604 | orchestrator |
2026-04-06 03:15:18.246615 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-06 03:15:18.246626 | orchestrator | Monday 06 April 2026 03:15:13 +0000 (0:00:00.887) 0:00:47.065 **********
2026-04-06 03:15:18.246661 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 03:15:18.246683 | orchestrator |
2026-04-06 03:15:18.246699 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2026-04-06 03:15:18.246714 | orchestrator | Monday 06 April 2026 03:15:14 +0000 (0:00:01.432) 0:00:48.497 **********
2026-04-06 03:15:18.246732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-06 03:15:18.246775 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-06 03:15:18.246829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-06 03:15:18.246842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-06 03:15:18.246864 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-06 03:15:23.783407 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-06 03:15:23.783539 | orchestrator |
2026-04-06 03:15:23.783570 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2026-04-06 03:15:23.783582 | orchestrator | Monday 06 April 2026 03:15:18 +0000 (0:00:03.733) 0:00:52.230 **********
2026-04-06 03:15:23.783595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-06 03:15:23.783607 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:15:23.783618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-06 03:15:23.783628 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:15:23.783638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-06 03:15:23.783648 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:15:23.783676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared',
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 03:15:23.783696 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:15:23.783712 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 03:15:23.783722 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:15:23.783732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 
03:15:23.783742 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:15:23.783766 | orchestrator | 2026-04-06 03:15:23.783784 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-04-06 03:15:23.783793 | orchestrator | Monday 06 April 2026 03:15:20 +0000 (0:00:02.466) 0:00:54.697 ********** 2026-04-06 03:15:23.783803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-06 03:15:23.783812 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:15:23.783828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-06 03:15:29.877700 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:15:29.877827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-06 03:15:29.877846 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:15:29.877858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 03:15:29.877869 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:15:29.877878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 03:15:29.877888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 03:15:29.877987 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:15:29.878002 | orchestrator | skipping: [testbed-node-5] 
2026-04-06 03:15:29.878012 | orchestrator | 2026-04-06 03:15:29.878073 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-04-06 03:15:29.878084 | orchestrator | Monday 06 April 2026 03:15:23 +0000 (0:00:03.074) 0:00:57.771 ********** 2026-04-06 03:15:29.878093 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:15:29.878102 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:15:29.878110 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:15:29.878119 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:15:29.878128 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:15:29.878136 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:15:29.878145 | orchestrator | 2026-04-06 03:15:29.878154 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-04-06 03:15:29.878163 | orchestrator | Monday 06 April 2026 03:15:26 +0000 (0:00:02.706) 0:01:00.478 ********** 2026-04-06 03:15:29.878172 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:15:29.878180 | orchestrator | 2026-04-06 03:15:29.878189 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-04-06 03:15:29.878216 | orchestrator | Monday 06 April 2026 03:15:26 +0000 (0:00:00.155) 0:01:00.633 ********** 2026-04-06 03:15:29.878227 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:15:29.878237 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:15:29.878247 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:15:29.878257 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:15:29.878268 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:15:29.878277 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:15:29.878287 | orchestrator | 2026-04-06 03:15:29.878298 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-04-06 03:15:29.878308 | orchestrator | 
Monday 06 April 2026 03:15:27 +0000 (0:00:00.682) 0:01:01.316 ********** 2026-04-06 03:15:29.878327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-06 03:15:29.878339 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:15:29.878350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696'}}}})  2026-04-06 03:15:29.878361 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:15:29.878371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-06 03:15:29.878392 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:15:29.878402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 03:15:29.878413 | orchestrator | skipping: [testbed-node-3] 2026-04-06 
03:15:29.878431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 03:15:38.554360 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:15:38.554479 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 03:15:38.554521 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:15:38.554533 | orchestrator | 2026-04-06 03:15:38.554544 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-04-06 03:15:38.554556 | orchestrator | Monday 06 April 2026 03:15:29 +0000 (0:00:02.542) 0:01:03.858 
********** 2026-04-06 03:15:38.554567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-06 03:15:38.554601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-06 03:15:38.554612 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-06 03:15:38.554648 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-06 03:15:38.554660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-06 03:15:38.554670 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-06 03:15:38.554687 | orchestrator | 2026-04-06 03:15:38.554697 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-04-06 03:15:38.554707 | orchestrator | Monday 06 April 2026 03:15:33 +0000 (0:00:03.205) 0:01:07.064 ********** 2026-04-06 03:15:38.554717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-06 03:15:38.554727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-06 03:15:38.554751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-06 03:15:44.260728 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-06 03:15:44.260849 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-06 03:15:44.260860 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-06 03:15:44.260867 | orchestrator | 2026-04-06 03:15:44.260875 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-04-06 03:15:44.260883 | orchestrator | Monday 06 April 2026 03:15:38 +0000 (0:00:05.471) 0:01:12.536 ********** 2026-04-06 03:15:44.260890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-06 03:15:44.260897 | 
orchestrator | skipping: [testbed-node-1] 2026-04-06 03:15:44.261012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-06 03:15:44.261035 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:15:44.261042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
2026-04-06 03:15:44.261048 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:15:44.261054 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-06 03:15:44.261061 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:15:44.261067 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-06 03:15:44.261073 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:15:44.261084 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-06 03:15:44.261090 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:15:44.261096 | orchestrator |
2026-04-06 03:15:44.261103 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2026-04-06 03:15:44.261110 | orchestrator | Monday 06 April 2026 03:15:40 +0000 (0:00:02.242) 0:01:14.778 **********
2026-04-06 03:15:44.261119 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:15:44.261123 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:15:44.261127 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:15:44.261130 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:15:44.261134 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:15:44.261138 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:15:44.261142 | orchestrator |
2026-04-06 03:15:44.261146 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2026-04-06 03:15:44.261155 | orchestrator | Monday 06 April 2026 03:15:44 +0000 (0:00:03.460) 0:01:18.239 **********
2026-04-06 03:16:05.404385 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-06 03:16:05.404478 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:16:05.404487 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-06 03:16:05.404493 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:16:05.404499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-06 03:16:05.404504 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:16:05.404510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-06 03:16:05.404555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-06 03:16:05.404561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-06 03:16:05.404566 | orchestrator |
2026-04-06 03:16:05.404572 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2026-04-06 03:16:05.404577 | orchestrator | Monday 06 April 2026 03:15:48 +0000 (0:00:03.870) 0:01:22.109 **********
2026-04-06 03:16:05.404582 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:16:05.404587 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:16:05.404591 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:16:05.404596 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:16:05.404601 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:16:05.404606 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:16:05.404610 | orchestrator |
2026-04-06 03:16:05.404615 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2026-04-06 03:16:05.404620 | orchestrator | Monday 06 April 2026 03:15:50 +0000 (0:00:02.508) 0:01:24.618 **********
2026-04-06 03:16:05.404625 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:16:05.404629 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:16:05.404634 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:16:05.404639 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:16:05.404643 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:16:05.404648 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:16:05.404653 | orchestrator |
2026-04-06 03:16:05.404657 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-04-06 03:16:05.404662 | orchestrator | Monday 06 April 2026 03:15:53 +0000 (0:00:02.449) 0:01:27.067 **********
2026-04-06 03:16:05.404667 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:16:05.404683 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:16:05.404689 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:16:05.404700 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:16:05.404705 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:16:05.404710 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:16:05.404714 | orchestrator |
2026-04-06 03:16:05.404719 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-04-06 03:16:05.404723 | orchestrator | Monday 06 April 2026 03:15:55 +0000 (0:00:02.443) 0:01:29.511 **********
2026-04-06 03:16:05.404733 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:16:05.404737 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:16:05.404742 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:16:05.404747 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:16:05.404751 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:16:05.404756 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:16:05.404761 | orchestrator |
2026-04-06 03:16:05.404765 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-04-06 03:16:05.404770 | orchestrator | Monday 06 April 2026 03:15:57 +0000 (0:00:02.164) 0:01:31.676 **********
2026-04-06 03:16:05.404775 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:16:05.404779 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:16:05.404784 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:16:05.404789 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:16:05.404793 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:16:05.404798 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:16:05.404802 | orchestrator |
2026-04-06 03:16:05.404807 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-04-06 03:16:05.404812 | orchestrator | Monday 06 April 2026 03:16:00 +0000 (0:00:02.539) 0:01:34.215 **********
2026-04-06 03:16:05.404816 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:16:05.404821 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:16:05.404826 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:16:05.404830 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:16:05.404838 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:16:05.404845 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:16:05.404855 | orchestrator |
2026-04-06 03:16:05.404870 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-04-06 03:16:05.404879 | orchestrator | Monday 06 April 2026 03:16:02 +0000 (0:00:02.626) 0:01:36.842 **********
2026-04-06 03:16:05.404887 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-06 03:16:05.404922 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:16:05.404930 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-06 03:16:05.404937 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:16:05.404945 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-06 03:16:05.404952 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:16:05.404960 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-06 03:16:05.404967 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:16:05.404981 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-06 03:16:10.286223 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:16:10.286331 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-04-06 03:16:10.286347 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:16:10.286358 | orchestrator |
2026-04-06 03:16:10.286369 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2026-04-06 03:16:10.286380 | orchestrator | Monday 06 April 2026 03:16:05 +0000 (0:00:02.538) 0:01:39.380 **********
2026-04-06 03:16:10.286393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-06 03:16:10.286430 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:16:10.286442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-06 03:16:10.286453 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:16:10.286464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-06 03:16:10.286473 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:16:10.286498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-06 03:16:10.286509 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:16:10.286563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-06 03:16:10.286574 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:16:10.286583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-06 03:16:10.286603 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:16:10.286611 | orchestrator |
2026-04-06 03:16:10.286619 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2026-04-06 03:16:10.286627 | orchestrator | Monday 06 April 2026 03:16:07 +0000 (0:00:02.432) 0:01:41.813 **********
2026-04-06 03:16:10.286636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-06 03:16:10.286646 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:16:10.286660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-06 03:16:10.286671 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:16:10.286690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-06 03:16:40.424495 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:16:40.424694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-06 03:16:40.424719 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:16:40.424732 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-06 03:16:40.424743 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:16:40.424755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-06 03:16:40.424766 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:16:40.424777 | orchestrator |
2026-04-06 03:16:40.424790 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-04-06 03:16:40.424802 | orchestrator | Monday 06 April 2026 03:16:10 +0000 (0:00:02.457) 0:01:44.271 **********
2026-04-06 03:16:40.424813 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:16:40.424824 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:16:40.424834 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:16:40.424845 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:16:40.424856 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:16:40.424906 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:16:40.424918 | orchestrator |
2026-04-06 03:16:40.424929 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-04-06 03:16:40.424957 | orchestrator | Monday 06 April 2026 03:16:12 +0000 (0:00:02.535) 0:01:46.806 **********
2026-04-06 03:16:40.425010 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:16:40.425024 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:16:40.425037 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:16:40.425049 | orchestrator | changed: [testbed-node-3]
2026-04-06 03:16:40.425062 | orchestrator | changed: [testbed-node-5]
2026-04-06 03:16:40.425074 | orchestrator | changed: [testbed-node-4]
2026-04-06 03:16:40.425086 | orchestrator |
2026-04-06 03:16:40.425099 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-04-06 03:16:40.425111 | orchestrator | Monday 06 April 2026 03:16:16 +0000 (0:00:04.005) 0:01:50.812 **********
2026-04-06 03:16:40.425136 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:16:40.425149 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:16:40.425161 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:16:40.425173 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:16:40.425185 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:16:40.425197 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:16:40.425209 | orchestrator |
2026-04-06 03:16:40.425222 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-04-06 03:16:40.425241 | orchestrator | Monday 06 April 2026 03:16:19 +0000 (0:00:02.417) 0:01:53.230 **********
2026-04-06 03:16:40.425260 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:16:40.425285 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:16:40.425362 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:16:40.425380 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:16:40.425397 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:16:40.425412 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:16:40.425429 | orchestrator |
2026-04-06 03:16:40.425447 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-04-06 03:16:40.425490 | orchestrator | Monday 06 April 2026 03:16:21 +0000 (0:00:02.641) 0:01:55.872 **********
2026-04-06 03:16:40.425510 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:16:40.425527 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:16:40.425544 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:16:40.425560 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:16:40.425579 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:16:40.425597 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:16:40.425616 | orchestrator |
2026-04-06 03:16:40.425632 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-04-06 03:16:40.425643 | orchestrator | Monday 06 April 2026 03:16:24 +0000 (0:00:02.609) 0:01:58.481 **********
2026-04-06 03:16:40.425654 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:16:40.425704 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:16:40.425716 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:16:40.425727 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:16:40.425738 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:16:40.425749 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:16:40.425760 | orchestrator |
2026-04-06 03:16:40.425771 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-04-06 03:16:40.425782 | orchestrator | Monday 06 April 2026 03:16:27 +0000 (0:00:02.682) 0:02:01.164 **********
2026-04-06 03:16:40.425793 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:16:40.425804 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:16:40.425814 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:16:40.425826 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:16:40.425836 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:16:40.425847 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:16:40.425858 | orchestrator |
2026-04-06 03:16:40.425912 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-04-06 03:16:40.425923 | orchestrator | Monday 06 April 2026 03:16:29 +0000 (0:00:02.571) 0:02:03.736 **********
2026-04-06 03:16:40.425934 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:16:40.425945 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:16:40.425957 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:16:40.425968 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:16:40.425978 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:16:40.425989 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:16:40.426000 | orchestrator |
2026-04-06 03:16:40.426011 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-04-06 03:16:40.426122 | orchestrator | Monday 06 April 2026 03:16:32 +0000 (0:00:02.570) 0:02:06.307 **********
2026-04-06 03:16:40.426133 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:16:40.426145 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:16:40.426155 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:16:40.426180 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:16:40.426191 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:16:40.426202 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:16:40.426212 | orchestrator |
2026-04-06 03:16:40.426223 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-04-06 03:16:40.426235 | orchestrator | Monday 06 April 2026 03:16:35 +0000 (0:00:03.085) 0:02:09.392 **********
2026-04-06 03:16:40.426246 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-06 03:16:40.426258 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:16:40.426269 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-06 03:16:40.426280 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:16:40.426291 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-06 03:16:40.426302 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:16:40.426313 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-06 03:16:40.426325 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:16:40.426367 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-06 03:16:40.426378 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:16:40.426390 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-06 03:16:40.426401 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:16:40.426412 | orchestrator | 2026-04-06 03:16:40.426432 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-04-06 03:16:40.426444 | orchestrator | Monday 06 April 2026 03:16:37 +0000 (0:00:02.199) 0:02:11.592 ********** 2026-04-06 03:16:40.426457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-06 03:16:40.426471 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:16:40.426501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-06 03:16:43.228933 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:16:43.229048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-06 03:16:43.229117 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:16:43.229132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 03:16:43.229145 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:16:43.229172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 03:16:43.229184 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:16:43.229196 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 03:16:43.229208 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:16:43.229220 | orchestrator | 2026-04-06 03:16:43.229232 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-04-06 03:16:43.229245 | orchestrator | Monday 06 April 2026 03:16:40 +0000 (0:00:02.814) 0:02:14.407 ********** 2026-04-06 03:16:43.229278 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-06 03:16:43.229302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-06 03:16:43.229314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-06 03:16:43.229331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-06 03:16:43.229343 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-06 03:16:43.229387 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-06 03:19:03.332387 | orchestrator |
2026-04-06 03:19:03.332491 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-06 03:19:03.332503 | orchestrator | Monday 06 April 2026 03:16:43 +0000 (0:00:02.807) 0:02:17.215 **********
2026-04-06 03:19:03.332510 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:19:03.332518 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:19:03.332524 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:19:03.332531 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:19:03.332537 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:19:03.332543 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:19:03.332550 | orchestrator |
2026-04-06 03:19:03.332556 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-04-06 03:19:03.332563 | orchestrator | Monday 06 April 2026 03:16:44 +0000 (0:00:00.924) 0:02:18.139 **********
2026-04-06 03:19:03.332569 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:19:03.332576 | orchestrator |
2026-04-06 03:19:03.332582 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-04-06 03:19:03.332589 | orchestrator | Monday 06 April 2026 03:16:46 +0000 (0:00:02.253) 0:02:20.393 **********
2026-04-06 03:19:03.332597 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:19:03.332604 | orchestrator |
2026-04-06 03:19:03.332610 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-04-06 03:19:03.332617 | orchestrator | Monday 06 April 2026 03:16:48 +0000 (0:00:02.304) 0:02:22.697 **********
2026-04-06 03:19:03.332623 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:19:03.332630 | orchestrator |
2026-04-06 03:19:03.332637 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-06 03:19:03.332643 | orchestrator | Monday 06 April 2026 03:17:32 +0000 (0:00:44.229) 0:03:06.927 **********
2026-04-06 03:19:03.332647 | orchestrator |
2026-04-06 03:19:03.332652 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-06 03:19:03.332656 | orchestrator | Monday 06 April 2026 03:17:33 +0000 (0:00:00.080) 0:03:07.006 **********
2026-04-06 03:19:03.332660 | orchestrator |
2026-04-06 03:19:03.332664 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-06 03:19:03.332669 | orchestrator | Monday 06 April 2026 03:17:33 +0000 (0:00:00.080) 0:03:07.087 **********
2026-04-06 03:19:03.332672 | orchestrator |
2026-04-06 03:19:03.332676 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-06 03:19:03.332680 | orchestrator | Monday 06 April 2026 03:17:33 +0000 (0:00:00.076) 0:03:07.163 **********
2026-04-06 03:19:03.332684 | orchestrator |
2026-04-06 03:19:03.332687 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-06 03:19:03.332704 | orchestrator | Monday 06 April 2026 03:17:33 +0000 (0:00:00.084) 0:03:07.247 **********
2026-04-06 03:19:03.332708 | orchestrator |
2026-04-06 03:19:03.332712 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-06 03:19:03.332716 | orchestrator | Monday 06 April 2026 03:17:33 +0000 (0:00:00.078) 0:03:07.326 **********
2026-04-06 03:19:03.332720 | orchestrator |
2026-04-06 03:19:03.332724 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-04-06 03:19:03.332727 | orchestrator | Monday 06 April 2026 03:17:33 +0000 (0:00:00.083) 0:03:07.410 **********
2026-04-06 03:19:03.332731 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:19:03.332751 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:19:03.332805 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:19:03.332810 | orchestrator |
2026-04-06 03:19:03.332814 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-04-06 03:19:03.332818 | orchestrator | Monday 06 April 2026 03:18:00 +0000 (0:00:26.642) 0:03:34.052 **********
2026-04-06 03:19:03.332821 | orchestrator | changed: [testbed-node-3]
2026-04-06 03:19:03.332825 | orchestrator | changed: [testbed-node-4]
2026-04-06 03:19:03.332829 | orchestrator | changed: [testbed-node-5]
2026-04-06 03:19:03.332833 | orchestrator |
2026-04-06 03:19:03.332837 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 03:19:03.332843 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-06 03:19:03.332849 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-04-06 03:19:03.332854 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-04-06 03:19:03.332861 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-06 03:19:03.332866 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-06 03:19:03.332872 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-06 03:19:03.332877 | orchestrator |
2026-04-06 03:19:03.332884 | orchestrator |
2026-04-06 03:19:03.332889 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 03:19:03.332896 | orchestrator | Monday 06 April 2026 03:19:02 +0000 (0:01:02.711) 0:04:36.763 **********
2026-04-06 03:19:03.332901 | orchestrator | ===============================================================================
2026-04-06 03:19:03.332904 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 62.71s
2026-04-06 03:19:03.332908 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 44.23s
2026-04-06 03:19:03.332912 | orchestrator | neutron : Restart neutron-server container ----------------------------- 26.64s
2026-04-06 03:19:03.332931 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.47s
2026-04-06 03:19:03.332938 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.48s
2026-04-06 03:19:03.332944 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.47s
2026-04-06 03:19:03.332950 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.01s
2026-04-06 03:19:03.332956 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.95s
2026-04-06 03:19:03.332962 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.92s
2026-04-06 03:19:03.332968 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.87s
2026-04-06 03:19:03.332974 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.73s
2026-04-06 03:19:03.332980 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.46s
2026-04-06 03:19:03.332987 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.29s
2026-04-06 03:19:03.332993 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.21s
2026-04-06 03:19:03.332999 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 3.09s
2026-04-06 03:19:03.333006 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.07s
2026-04-06 03:19:03.333012 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.83s
2026-04-06 03:19:03.333026 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 2.81s
2026-04-06 03:19:03.333034 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 2.81s
2026-04-06 03:19:03.333040 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.81s
2026-04-06 03:19:05.931960 | orchestrator | 2026-04-06 03:19:05 | INFO  | Task aa03a145-524c-465b-95a0-83acf7f1f19e (nova) was prepared for execution.
2026-04-06 03:19:05.932558 | orchestrator | 2026-04-06 03:19:05 | INFO  | It takes a moment until task aa03a145-524c-465b-95a0-83acf7f1f19e (nova) has been started and output is visible here.
2026-04-06 03:21:06.256674 | orchestrator |
2026-04-06 03:21:06.256921 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 03:21:06.256953 | orchestrator |
2026-04-06 03:21:06.256993 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-04-06 03:21:06.257006 | orchestrator | Monday 06 April 2026 03:19:10 +0000 (0:00:00.306) 0:00:00.306 **********
2026-04-06 03:21:06.257018 | orchestrator | changed: [testbed-manager]
2026-04-06 03:21:06.257033 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:21:06.257051 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:21:06.257082 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:21:06.257100 | orchestrator | changed: [testbed-node-3]
2026-04-06 03:21:06.257118 | orchestrator | changed: [testbed-node-4]
2026-04-06 03:21:06.257136 | orchestrator | changed: [testbed-node-5]
2026-04-06 03:21:06.257155 | orchestrator |
2026-04-06 03:21:06.257172 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 03:21:06.257189 | orchestrator | Monday 06 April 2026 03:19:11 +0000 (0:00:00.925) 0:00:01.232 **********
2026-04-06 03:21:06.257207 | orchestrator | changed: [testbed-manager]
2026-04-06 03:21:06.257227 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:21:06.257246 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:21:06.257264 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:21:06.257282 | orchestrator | changed: [testbed-node-3]
2026-04-06 03:21:06.257300 | orchestrator | changed: [testbed-node-4]
2026-04-06 03:21:06.257320 | orchestrator | changed: [testbed-node-5]
2026-04-06 03:21:06.257340 | orchestrator |
2026-04-06 03:21:06.257356 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 03:21:06.257373 | orchestrator | Monday 06 April 2026 03:19:12 +0000 (0:00:00.934) 0:00:02.167 **********
2026-04-06 03:21:06.257393 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-04-06 03:21:06.257412 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-04-06 03:21:06.257433 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-04-06 03:21:06.257452 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-04-06 03:21:06.257471 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-04-06 03:21:06.257490 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-04-06 03:21:06.257509 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-04-06 03:21:06.257528 | orchestrator |
2026-04-06 03:21:06.257548 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-04-06 03:21:06.257566 | orchestrator |
2026-04-06 03:21:06.257584 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-06 03:21:06.257598 | orchestrator | Monday 06 April 2026 03:19:13 +0000 (0:00:00.796) 0:00:02.963 **********
2026-04-06 03:21:06.257609 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 03:21:06.257620 | orchestrator |
2026-04-06 03:21:06.257631 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-04-06 03:21:06.257643 | orchestrator | Monday 06 April 2026 03:19:14 +0000 (0:00:00.860) 0:00:03.824 **********
2026-04-06 03:21:06.257654 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-04-06 03:21:06.257666 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-04-06 03:21:06.257741 | orchestrator |
2026-04-06 03:21:06.257755 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-04-06 03:21:06.257766 | orchestrator | Monday 06 April 2026 03:19:18 +0000 (0:00:04.235) 0:00:08.059 **********
2026-04-06 03:21:06.257777 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-06 03:21:06.257788 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-06 03:21:06.257799 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:21:06.257810 | orchestrator |
2026-04-06 03:21:06.257821 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-06 03:21:06.257832 | orchestrator | Monday 06 April 2026 03:19:22 +0000 (0:00:04.391) 0:00:12.451 **********
2026-04-06 03:21:06.257843 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:21:06.257854 | orchestrator |
2026-04-06 03:21:06.257865 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-04-06 03:21:06.257876 | orchestrator | Monday 06 April 2026 03:19:23 +0000 (0:00:00.713) 0:00:13.164 **********
2026-04-06 03:21:06.257887 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:21:06.257898 | orchestrator |
2026-04-06 03:21:06.257909 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-04-06 03:21:06.257920 | orchestrator | Monday 06 April 2026 03:19:24 +0000 (0:00:01.242) 0:00:14.407 **********
2026-04-06 03:21:06.257931 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:21:06.257942 | orchestrator |
2026-04-06 03:21:06.257953 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-06 03:21:06.257963 | orchestrator | Monday 06 April 2026 03:19:27 +0000 (0:00:02.699) 0:00:17.107 **********
2026-04-06 03:21:06.257974 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:21:06.257985 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:21:06.257996 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:21:06.258006 | orchestrator |
2026-04-06 03:21:06.258078 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-06 03:21:06.258091 | orchestrator | Monday 06 April 2026 03:19:27 +0000 (0:00:00.327) 0:00:17.434 **********
2026-04-06 03:21:06.258102 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:21:06.258113 | orchestrator |
2026-04-06 03:21:06.258124 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-04-06 03:21:06.258135 | orchestrator | Monday 06 April 2026 03:20:00 +0000 (0:00:32.506) 0:00:49.941 **********
2026-04-06 03:21:06.258146 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:21:06.258157 | orchestrator |
2026-04-06 03:21:06.258168 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-06 03:21:06.258179 | orchestrator | Monday 06 April 2026 03:20:14 +0000 (0:00:14.661) 0:01:04.602 **********
2026-04-06 03:21:06.258190 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:21:06.258201 | orchestrator |
2026-04-06 03:21:06.258212 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-06 03:21:06.258223 | orchestrator | Monday 06 April 2026 03:20:26 +0000 (0:00:11.916) 0:01:16.519 **********
2026-04-06 03:21:06.258257 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:21:06.258269 | orchestrator |
2026-04-06 03:21:06.258280 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-04-06 03:21:06.258301 | orchestrator | Monday 06 April 2026 03:20:27 +0000 (0:00:00.728) 0:01:17.247 **********
2026-04-06 03:21:06.258312 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:21:06.258324 | orchestrator |
2026-04-06 03:21:06.258334 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-06 03:21:06.258345 | orchestrator | Monday 06 April 2026 03:20:28 +0000 (0:00:00.518) 0:01:17.766 **********
2026-04-06 03:21:06.258357 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 03:21:06.258368 | orchestrator |
2026-04-06 03:21:06.258379 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-06 03:21:06.258390 | orchestrator | Monday 06 April 2026 03:20:28 +0000 (0:00:00.802) 0:01:18.568 **********
2026-04-06 03:21:06.258411 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:21:06.258422 | orchestrator |
2026-04-06 03:21:06.258433 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-06 03:21:06.258444 | orchestrator | Monday 06 April 2026 03:20:46 +0000 (0:00:17.875) 0:01:36.444 **********
2026-04-06 03:21:06.258455 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:21:06.258466 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:21:06.258477 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:21:06.258488 | orchestrator |
2026-04-06 03:21:06.258499 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-04-06 03:21:06.258509 | orchestrator |
2026-04-06 03:21:06.258520 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-06 03:21:06.258531 | orchestrator | Monday 06 April 2026 03:20:47 +0000 (0:00:00.364) 0:01:36.809 **********
2026-04-06 03:21:06.258543 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 03:21:06.258554 | orchestrator |
2026-04-06 03:21:06.258565 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-04-06 03:21:06.258576 | orchestrator | Monday 06 April 2026 03:20:48 +0000 (0:00:00.890) 0:01:37.699 **********
2026-04-06 03:21:06.258587 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:21:06.258597 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:21:06.258608 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:21:06.258619 | orchestrator |
2026-04-06 03:21:06.258630 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-04-06 03:21:06.258641 | orchestrator | Monday 06 April 2026 03:20:50 +0000 (0:00:02.022) 0:01:39.721 **********
2026-04-06 03:21:06.258652 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:21:06.258663 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:21:06.258674 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:21:06.258684 | orchestrator |
2026-04-06 03:21:06.258723 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-06 03:21:06.258743 | orchestrator | Monday 06 April 2026 03:20:52 +0000 (0:00:02.098) 0:01:41.820 **********
2026-04-06 03:21:06.258761 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:21:06.258779 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:21:06.258798 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:21:06.258816 | orchestrator |
2026-04-06 03:21:06.258833 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-06 03:21:06.258849 | orchestrator | Monday 06 April 2026 03:20:52 +0000 (0:00:00.590) 0:01:42.411 **********
2026-04-06 03:21:06.258865 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-06 03:21:06.258882 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:21:06.258901 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-06 03:21:06.258918 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:21:06.258937 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-06 03:21:06.258955 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-04-06 03:21:06.258974 | orchestrator |
2026-04-06 03:21:06.258992 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-06 03:21:06.259007 | orchestrator | Monday 06 April 2026 03:21:00 +0000 (0:00:07.598) 0:01:50.010 **********
2026-04-06 03:21:06.259018 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:21:06.259030 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:21:06.259040 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:21:06.259051 | orchestrator |
2026-04-06 03:21:06.259063 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-06 03:21:06.259074 | orchestrator | Monday 06 April 2026 03:21:00 +0000 (0:00:00.392) 0:01:50.402 **********
2026-04-06 03:21:06.259084 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-06 03:21:06.259095 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:21:06.259106 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-06 03:21:06.259117 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:21:06.259138 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-06 03:21:06.259149 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:21:06.259159 | orchestrator |
2026-04-06 03:21:06.259170 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-06 03:21:06.259181 | orchestrator | Monday 06 April 2026 03:21:01 +0000 (0:00:01.206) 0:01:51.609 **********
2026-04-06 03:21:06.259192 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:21:06.259203 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:21:06.259214 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:21:06.259224 | orchestrator |
2026-04-06 03:21:06.259235 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-04-06 03:21:06.259246 | orchestrator | Monday 06 April 2026 03:21:02 +0000 (0:00:00.542) 0:01:52.151 **********
2026-04-06 03:21:06.259257 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:21:06.259268 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:21:06.259279 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:21:06.259290 | orchestrator |
2026-04-06 03:21:06.259300 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-04-06 03:21:06.259311 | orchestrator | Monday 06 April 2026 03:21:03 +0000 (0:00:01.013) 0:01:53.165 **********
2026-04-06 03:21:06.259322 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:21:06.259333 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:21:06.259354 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:22:25.767237 | orchestrator |
2026-04-06 03:22:25.767348 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-04-06 03:22:25.767363 | orchestrator | Monday 06 April 2026 03:21:06 +0000 (0:00:02.745) 0:01:55.911 **********
2026-04-06 03:22:25.767388 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:22:25.767407 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:22:25.767416 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:22:25.767426 | orchestrator |
2026-04-06 03:22:25.767435 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-06 03:22:25.767444 | orchestrator | Monday 06 April 2026 03:21:28 +0000 (0:00:21.807) 0:02:17.718 **********
2026-04-06 03:22:25.767454 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:22:25.767463 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:22:25.767471 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:22:25.767480 | orchestrator |
2026-04-06 03:22:25.767489 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-06 03:22:25.767498 | orchestrator | Monday 06 April 2026 03:21:40 +0000 (0:00:12.386) 0:02:30.105 **********
2026-04-06 03:22:25.767507 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:22:25.767516 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:22:25.767525 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:22:25.767534 | orchestrator | 2026-04-06 03:22:25.767543 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-04-06 03:22:25.767552 | orchestrator | Monday 06 April 2026 03:21:41 +0000 (0:00:01.388) 0:02:31.493 ********** 2026-04-06 03:22:25.767561 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:22:25.767570 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:22:25.767579 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:22:25.767588 | orchestrator | 2026-04-06 03:22:25.767597 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-04-06 03:22:25.767606 | orchestrator | Monday 06 April 2026 03:21:54 +0000 (0:00:12.758) 0:02:44.251 ********** 2026-04-06 03:22:25.767615 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:22:25.767624 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:22:25.767633 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:22:25.767641 | orchestrator | 2026-04-06 03:22:25.767650 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-06 03:22:25.767659 | orchestrator | Monday 06 April 2026 03:21:55 +0000 (0:00:01.209) 0:02:45.461 ********** 2026-04-06 03:22:25.767754 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:22:25.767764 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:22:25.767798 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:22:25.767809 | orchestrator | 2026-04-06 03:22:25.767819 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-04-06 03:22:25.767829 | orchestrator | 2026-04-06 03:22:25.767840 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-06 03:22:25.767850 | orchestrator | Monday 06 April 2026 03:21:56 +0000 (0:00:00.326) 0:02:45.787 ********** 2026-04-06 03:22:25.767859 | orchestrator | 
included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 03:22:25.767870 | orchestrator | 2026-04-06 03:22:25.767881 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-04-06 03:22:25.767891 | orchestrator | Monday 06 April 2026 03:21:56 +0000 (0:00:00.835) 0:02:46.622 ********** 2026-04-06 03:22:25.767901 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-04-06 03:22:25.767911 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-04-06 03:22:25.767921 | orchestrator | 2026-04-06 03:22:25.767931 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-04-06 03:22:25.767942 | orchestrator | Monday 06 April 2026 03:22:00 +0000 (0:00:03.297) 0:02:49.920 ********** 2026-04-06 03:22:25.767952 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-04-06 03:22:25.768005 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-04-06 03:22:25.768017 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-04-06 03:22:25.768028 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-04-06 03:22:25.768048 | orchestrator | 2026-04-06 03:22:25.768059 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-04-06 03:22:25.768069 | orchestrator | Monday 06 April 2026 03:22:06 +0000 (0:00:06.331) 0:02:56.251 ********** 2026-04-06 03:22:25.768079 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-06 03:22:25.768089 | orchestrator | 2026-04-06 03:22:25.768099 | orchestrator | TASK [service-ks-register : nova | Creating users] 
***************************** 2026-04-06 03:22:25.768108 | orchestrator | Monday 06 April 2026 03:22:09 +0000 (0:00:03.155) 0:02:59.406 ********** 2026-04-06 03:22:25.768119 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-06 03:22:25.768129 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-04-06 03:22:25.768140 | orchestrator | 2026-04-06 03:22:25.768149 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-04-06 03:22:25.768157 | orchestrator | Monday 06 April 2026 03:22:13 +0000 (0:00:03.752) 0:03:03.159 ********** 2026-04-06 03:22:25.768166 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-06 03:22:25.768175 | orchestrator | 2026-04-06 03:22:25.768184 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-04-06 03:22:25.768192 | orchestrator | Monday 06 April 2026 03:22:16 +0000 (0:00:03.228) 0:03:06.387 ********** 2026-04-06 03:22:25.768201 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-04-06 03:22:25.768210 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-04-06 03:22:25.768219 | orchestrator | 2026-04-06 03:22:25.768227 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-06 03:22:25.768255 | orchestrator | Monday 06 April 2026 03:22:24 +0000 (0:00:07.660) 0:03:14.048 ********** 2026-04-06 03:22:25.768281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-06 03:22:25.768305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-06 03:22:25.768316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-06 03:22:25.768336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-04-06 03:22:30.640887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:22:30.640980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:22:30.640987 | orchestrator | 2026-04-06 03:22:30.640992 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-06 03:22:30.640997 | orchestrator | Monday 06 April 2026 03:22:25 +0000 (0:00:01.369) 0:03:15.418 ********** 2026-04-06 03:22:30.641001 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:22:30.641006 | orchestrator | 2026-04-06 03:22:30.641010 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-06 03:22:30.641014 | orchestrator | Monday 06 April 2026 03:22:25 +0000 (0:00:00.139) 0:03:15.558 ********** 2026-04-06 03:22:30.641018 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:22:30.641022 | 
orchestrator | skipping: [testbed-node-1] 2026-04-06 03:22:30.641025 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:22:30.641029 | orchestrator | 2026-04-06 03:22:30.641033 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-04-06 03:22:30.641037 | orchestrator | Monday 06 April 2026 03:22:26 +0000 (0:00:00.321) 0:03:15.880 ********** 2026-04-06 03:22:30.641041 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-06 03:22:30.641045 | orchestrator | 2026-04-06 03:22:30.641048 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-06 03:22:30.641052 | orchestrator | Monday 06 April 2026 03:22:27 +0000 (0:00:00.801) 0:03:16.681 ********** 2026-04-06 03:22:30.641056 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:22:30.641060 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:22:30.641064 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:22:30.641067 | orchestrator | 2026-04-06 03:22:30.641071 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-06 03:22:30.641075 | orchestrator | Monday 06 April 2026 03:22:27 +0000 (0:00:00.562) 0:03:17.244 ********** 2026-04-06 03:22:30.641079 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 03:22:30.641084 | orchestrator | 2026-04-06 03:22:30.641087 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-06 03:22:30.641092 | orchestrator | Monday 06 April 2026 03:22:28 +0000 (0:00:00.621) 0:03:17.866 ********** 2026-04-06 03:22:30.641098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-06 03:22:30.641130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-06 03:22:30.641136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-06 03:22:30.641141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:22:30.641145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:22:30.641155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:22:30.641159 | orchestrator | 2026-04-06 03:22:30.641166 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-06 03:22:32.547756 | orchestrator | Monday 06 April 2026 03:22:30 +0000 (0:00:02.431) 0:03:20.298 ********** 2026-04-06 03:22:32.547880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-06 03:22:32.547898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 03:22:32.547906 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:22:32.547915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-06 03:22:32.547942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 03:22:32.547949 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:22:32.547987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-06 03:22:32.548001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 03:22:32.548008 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:22:32.548016 | orchestrator | 2026-04-06 03:22:32.548025 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-06 03:22:32.548033 | orchestrator | Monday 06 April 2026 03:22:31 +0000 (0:00:00.928) 0:03:21.227 
********** 2026-04-06 03:22:32.548041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-06 03:22:32.548057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 03:22:32.548064 | orchestrator | skipping: [testbed-node-0] 
2026-04-06 03:22:32.548081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-06 03:22:34.909942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 03:22:34.910086 | orchestrator | skipping: [testbed-node-1] 2026-04-06 
03:22:34.910103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-06 03:22:34.910138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 03:22:34.910147 | orchestrator | skipping: [testbed-node-2] 2026-04-06 
03:22:34.910156 | orchestrator | 2026-04-06 03:22:34.910166 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-06 03:22:34.910175 | orchestrator | Monday 06 April 2026 03:22:32 +0000 (0:00:00.978) 0:03:22.205 ********** 2026-04-06 03:22:34.910196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-06 03:22:34.910223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-06 03:22:34.910234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-06 03:22:34.910251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:22:34.910275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:22:34.910291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-04-06 03:22:41.926101 | orchestrator | 2026-04-06 03:22:41.926201 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-06 03:22:41.926214 | orchestrator | Monday 06 April 2026 03:22:34 +0000 (0:00:02.358) 0:03:24.564 ********** 2026-04-06 03:22:41.926228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-06 03:22:41.926267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-06 03:22:41.926291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-06 03:22:41.926317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:22:41.926331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:22:41.926346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:22:41.926370 | orchestrator | 2026-04-06 03:22:41.926383 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-06 03:22:41.926396 | orchestrator | Monday 06 April 2026 03:22:41 +0000 (0:00:06.354) 0:03:30.919 ********** 2026-04-06 03:22:41.926411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-06 03:22:41.926432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 03:22:41.926443 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:22:41.926463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-06 03:22:46.306614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 03:22:46.306778 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:22:46.306800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-06 03:22:46.306833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 03:22:46.306846 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:22:46.306866 | orchestrator |
2026-04-06 03:22:46.306888 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2026-04-06 03:22:46.306911 | orchestrator | Monday 06 April 2026 03:22:41 +0000 (0:00:00.663) 0:03:31.583 **********
2026-04-06 03:22:46.306931 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:22:46.306951 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:22:46.306971 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:22:46.306990 | orchestrator |
2026-04-06 03:22:46.307009 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2026-04-06 03:22:46.307030 | orchestrator | Monday 06 April 2026 03:22:43 +0000 (0:00:01.555) 0:03:33.138 **********
2026-04-06 03:22:46.307050 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:22:46.307070 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:22:46.307091 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:22:46.307110 | orchestrator |
2026-04-06 03:22:46.307131 | orchestrator | TASK [nova : Check nova containers] ********************************************
2026-04-06 03:22:46.307154 | orchestrator | Monday 06 April 2026 03:22:43 +0000 (0:00:00.326) 0:03:33.464 **********
2026-04-06 03:22:46.307204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-06 03:22:46.307261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-06 03:22:46.307298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-06 03:22:46.307323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:22:46.307356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:22:46.307387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:23:28.673985 | orchestrator | 2026-04-06 03:23:28.674174 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-06 03:23:28.674222 | orchestrator | Monday 06 April 2026 03:22:45 +0000 (0:00:02.057) 0:03:35.522 ********** 2026-04-06 03:23:28.674233 | orchestrator | 2026-04-06 03:23:28.674244 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-06 03:23:28.674255 | orchestrator | Monday 06 April 2026 03:22:46 +0000 (0:00:00.146) 0:03:35.669 ********** 2026-04-06 
03:23:28.674265 | orchestrator |
2026-04-06 03:23:28.674276 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-04-06 03:23:28.674298 | orchestrator | Monday 06 April 2026 03:22:46 +0000 (0:00:00.141) 0:03:35.810 **********
2026-04-06 03:23:28.674317 | orchestrator |
2026-04-06 03:23:28.674329 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-04-06 03:23:28.674339 | orchestrator | Monday 06 April 2026 03:22:46 +0000 (0:00:00.145) 0:03:35.956 **********
2026-04-06 03:23:28.674349 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:23:28.674361 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:23:28.674372 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:23:28.674381 | orchestrator |
2026-04-06 03:23:28.674392 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-04-06 03:23:28.674402 | orchestrator | Monday 06 April 2026 03:23:05 +0000 (0:00:18.805) 0:03:54.761 **********
2026-04-06 03:23:28.674412 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:23:28.674422 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:23:28.674433 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:23:28.674442 | orchestrator |
2026-04-06 03:23:28.674453 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-04-06 03:23:28.674463 | orchestrator |
2026-04-06 03:23:28.674473 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-06 03:23:28.674483 | orchestrator | Monday 06 April 2026 03:23:15 +0000 (0:00:10.486) 0:04:05.248 **********
2026-04-06 03:23:28.674496 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 03:23:28.674507 | orchestrator |
2026-04-06 03:23:28.674517 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-06 03:23:28.674528 | orchestrator | Monday 06 April 2026 03:23:16 +0000 (0:00:01.284) 0:04:06.533 **********
2026-04-06 03:23:28.674540 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:23:28.674570 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:23:28.674582 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:23:28.674593 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:23:28.674605 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:23:28.674670 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:23:28.674688 | orchestrator |
2026-04-06 03:23:28.674695 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-04-06 03:23:28.674703 | orchestrator | Monday 06 April 2026 03:23:17 +0000 (0:00:00.837) 0:04:07.371 **********
2026-04-06 03:23:28.674710 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:23:28.674718 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:23:28.674726 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:23:28.674737 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 03:23:28.674748 | orchestrator |
2026-04-06 03:23:28.674759 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-06 03:23:28.674770 | orchestrator | Monday 06 April 2026 03:23:18 +0000 (0:00:00.971) 0:04:08.340 **********
2026-04-06 03:23:28.674781 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-04-06 03:23:28.674792 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-04-06 03:23:28.674803 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-04-06 03:23:28.674813 | orchestrator |
2026-04-06 03:23:28.674824 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-06 03:23:28.674835 | orchestrator | Monday 06 April 2026 03:23:19 +0000 (0:00:00.971) 0:04:09.312 **********
2026-04-06 03:23:28.674846 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-04-06 03:23:28.674856 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-04-06 03:23:28.674868 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-04-06 03:23:28.674879 | orchestrator |
2026-04-06 03:23:28.674889 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-06 03:23:28.674901 | orchestrator | Monday 06 April 2026 03:23:20 +0000 (0:00:01.183) 0:04:10.495 **********
2026-04-06 03:23:28.674912 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-04-06 03:23:28.674923 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:23:28.674935 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-04-06 03:23:28.674941 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:23:28.674948 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-04-06 03:23:28.674954 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:23:28.674960 | orchestrator |
2026-04-06 03:23:28.674966 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-04-06 03:23:28.674973 | orchestrator | Monday 06 April 2026 03:23:21 +0000 (0:00:00.587) 0:04:11.083 **********
2026-04-06 03:23:28.674981 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 03:23:28.674990 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 03:23:28.675001 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 03:23:28.675010 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 03:23:28.675026 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:23:28.675036 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 03:23:28.675046 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 03:23:28.675056 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:23:28.675088 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 03:23:28.675098 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 03:23:28.675108 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 03:23:28.675118 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:23:28.675128 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 03:23:28.675138 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 03:23:28.675160 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 03:23:28.675171 | orchestrator |
2026-04-06 03:23:28.675181 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-04-06 03:23:28.675191 | orchestrator | Monday 06 April 2026 03:23:23 +0000 (0:00:02.019) 0:04:13.103 **********
2026-04-06 03:23:28.675202 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:23:28.675209 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:23:28.675215 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:23:28.675221 | orchestrator | changed: [testbed-node-3]
2026-04-06 03:23:28.675227 | orchestrator | changed: [testbed-node-4]
2026-04-06 03:23:28.675234 | orchestrator | changed: [testbed-node-5]
2026-04-06 03:23:28.675240 | orchestrator |
2026-04-06 03:23:28.675246 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-04-06 03:23:28.675252 | orchestrator |
Monday 06 April 2026 03:23:24 +0000 (0:00:01.257) 0:04:14.360 ********** 2026-04-06 03:23:28.675259 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:23:28.675265 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:23:28.675271 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:23:28.675277 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:23:28.675284 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:23:28.675290 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:23:28.675296 | orchestrator | 2026-04-06 03:23:28.675302 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-06 03:23:28.675308 | orchestrator | Monday 06 April 2026 03:23:26 +0000 (0:00:01.987) 0:04:16.348 ********** 2026-04-06 03:23:28.675324 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 03:23:28.675337 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 03:23:28.675350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 03:23:30.472242 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 03:23:30.472342 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 03:23:30.472368 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 03:23:30.472378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 03:23:30.472386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 03:23:30.472394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 03:23:30.472440 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 03:23:30.472450 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 03:23:30.472462 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 03:23:30.472469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 03:23:30.472476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 03:23:30.472483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 03:23:30.472496 | orchestrator | 2026-04-06 03:23:30.472504 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-06 03:23:30.472512 | orchestrator | Monday 06 
April 2026 03:23:29 +0000 (0:00:02.343) 0:04:18.691 ********** 2026-04-06 03:23:30.472520 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 03:23:30.472529 | orchestrator | 2026-04-06 03:23:30.472535 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-06 03:23:30.472546 | orchestrator | Monday 06 April 2026 03:23:30 +0000 (0:00:01.438) 0:04:20.130 ********** 2026-04-06 03:23:33.868814 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 03:23:33.868955 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 03:23:33.868972 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 03:23:33.868984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 
03:23:33.869013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 03:23:33.869043 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 03:23:33.869054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 03:23:33.869071 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 03:23:33.869081 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 03:23:33.869090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 03:23:33.869100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 03:23:33.869124 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 03:23:35.674717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 03:23:35.674814 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 03:23:35.674845 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 03:23:35.674856 | orchestrator | 2026-04-06 03:23:35.674865 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-06 03:23:35.674875 | orchestrator | Monday 06 April 2026 03:23:34 +0000 (0:00:03.797) 0:04:23.927 ********** 2026-04-06 03:23:35.674885 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 03:23:35.674919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 03:23:35.674944 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 03:23:35.674953 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:23:35.674963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 03:23:35.674977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 03:23:35.674986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 03:23:35.675001 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:23:35.675009 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 03:23:35.675025 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 03:23:37.739293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 03:23:37.739406 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:23:37.739441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 03:23:37.739454 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 03:23:37.739487 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:23:37.739499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 03:23:37.739511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 03:23:37.739522 | orchestrator | skipping: [testbed-node-1] 2026-04-06 
03:23:37.739534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 03:23:37.739577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 03:23:37.739598 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:23:37.739617 | orchestrator | 2026-04-06 03:23:37.739638 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-06 03:23:37.739771 | orchestrator | Monday 06 April 2026 03:23:36 +0000 (0:00:01.741) 0:04:25.668 ********** 2026-04-06 03:23:37.739805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 03:23:37.739842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 03:23:37.739863 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 03:23:37.739877 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:23:37.739890 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 03:23:37.739917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 03:23:42.368394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 03:23:42.368543 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:23:42.368597 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 03:23:42.368719 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 03:23:42.368736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 03:23:42.368748 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:23:42.368761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 03:23:42.368797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 03:23:42.368809 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:23:42.368828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 03:23:42.368850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 03:23:42.368862 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:23:42.368873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 03:23:42.368885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 03:23:42.368896 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:23:42.368907 | orchestrator | 2026-04-06 03:23:42.368920 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-06 03:23:42.368932 | orchestrator | Monday 06 April 2026 03:23:38 +0000 (0:00:02.462) 0:04:28.130 ********** 2026-04-06 03:23:42.368943 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:23:42.368954 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:23:42.368965 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:23:42.368976 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 03:23:42.368987 | orchestrator | 2026-04-06 03:23:42.368999 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-06 
03:23:42.369009 | orchestrator | Monday 06 April 2026 03:23:39 +0000 (0:00:00.991) 0:04:29.121 ********** 2026-04-06 03:23:42.369020 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-06 03:23:42.369031 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-06 03:23:42.369042 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-06 03:23:42.369052 | orchestrator | 2026-04-06 03:23:42.369064 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-06 03:23:42.369075 | orchestrator | Monday 06 April 2026 03:23:40 +0000 (0:00:01.273) 0:04:30.395 ********** 2026-04-06 03:23:42.369085 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-06 03:23:42.369096 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-06 03:23:42.369107 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-06 03:23:42.369118 | orchestrator | 2026-04-06 03:23:42.369129 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-06 03:23:42.369140 | orchestrator | Monday 06 April 2026 03:23:41 +0000 (0:00:01.038) 0:04:31.433 ********** 2026-04-06 03:23:42.369150 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:23:42.369161 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:23:42.369179 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:23:42.369190 | orchestrator | 2026-04-06 03:23:42.369207 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-04-06 03:24:04.779415 | orchestrator | Monday 06 April 2026 03:23:42 +0000 (0:00:00.591) 0:04:32.025 ********** 2026-04-06 03:24:04.779536 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:24:04.779549 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:24:04.779557 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:24:04.779565 | orchestrator | 2026-04-06 03:24:04.779574 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 
2026-04-06 03:24:04.779581 | orchestrator | Monday 06 April 2026 03:23:42 +0000 (0:00:00.533) 0:04:32.558 ********** 2026-04-06 03:24:04.779589 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-06 03:24:04.779597 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-06 03:24:04.779615 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-06 03:24:04.779682 | orchestrator | 2026-04-06 03:24:04.779693 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-06 03:24:04.779701 | orchestrator | Monday 06 April 2026 03:23:44 +0000 (0:00:01.454) 0:04:34.012 ********** 2026-04-06 03:24:04.779709 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-06 03:24:04.779717 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-06 03:24:04.779739 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-06 03:24:04.779747 | orchestrator | 2026-04-06 03:24:04.779755 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-06 03:24:04.779762 | orchestrator | Monday 06 April 2026 03:23:45 +0000 (0:00:01.206) 0:04:35.219 ********** 2026-04-06 03:24:04.779770 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-06 03:24:04.779778 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-06 03:24:04.779790 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-06 03:24:04.779803 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-04-06 03:24:04.779815 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-04-06 03:24:04.779826 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-04-06 03:24:04.779839 | orchestrator | 2026-04-06 03:24:04.779850 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-06 
03:24:04.779862 | orchestrator | Monday 06 April 2026 03:23:49 +0000 (0:00:03.856) 0:04:39.076 ********** 2026-04-06 03:24:04.779873 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:24:04.779886 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:24:04.779898 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:24:04.779910 | orchestrator | 2026-04-06 03:24:04.779921 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-06 03:24:04.779934 | orchestrator | Monday 06 April 2026 03:23:49 +0000 (0:00:00.332) 0:04:39.408 ********** 2026-04-06 03:24:04.779968 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:24:04.779983 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:24:04.779998 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:24:04.780015 | orchestrator | 2026-04-06 03:24:04.780026 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-04-06 03:24:04.780037 | orchestrator | Monday 06 April 2026 03:23:50 +0000 (0:00:00.577) 0:04:39.986 ********** 2026-04-06 03:24:04.780050 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:24:04.780064 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:24:04.780079 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:24:04.780094 | orchestrator | 2026-04-06 03:24:04.780108 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-06 03:24:04.780123 | orchestrator | Monday 06 April 2026 03:23:51 +0000 (0:00:01.390) 0:04:41.376 ********** 2026-04-06 03:24:04.780164 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-06 03:24:04.780202 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-06 03:24:04.780212 | orchestrator | 
changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-06 03:24:04.780221 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-06 03:24:04.780230 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-06 03:24:04.780239 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-06 03:24:04.780248 | orchestrator | 2026-04-06 03:24:04.780257 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-04-06 03:24:04.780266 | orchestrator | Monday 06 April 2026 03:23:55 +0000 (0:00:03.530) 0:04:44.906 ********** 2026-04-06 03:24:04.780276 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-06 03:24:04.780291 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-06 03:24:04.780305 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-06 03:24:04.780318 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-06 03:24:04.780332 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:24:04.780348 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-06 03:24:04.780364 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:24:04.780379 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-06 03:24:04.780394 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:24:04.780409 | orchestrator | 2026-04-06 03:24:04.780423 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-04-06 03:24:04.780432 | orchestrator | Monday 06 April 2026 03:23:58 +0000 (0:00:03.523) 0:04:48.430 ********** 2026-04-06 03:24:04.780441 | 
orchestrator | skipping: [testbed-node-3] 2026-04-06 03:24:04.780449 | orchestrator | 2026-04-06 03:24:04.780485 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-04-06 03:24:04.780502 | orchestrator | Monday 06 April 2026 03:23:58 +0000 (0:00:00.135) 0:04:48.565 ********** 2026-04-06 03:24:04.780516 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:24:04.780531 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:24:04.780546 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:24:04.780561 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:24:04.780576 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:24:04.780590 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:24:04.780605 | orchestrator | 2026-04-06 03:24:04.780615 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-04-06 03:24:04.780624 | orchestrator | Monday 06 April 2026 03:23:59 +0000 (0:00:00.890) 0:04:49.455 ********** 2026-04-06 03:24:04.780633 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-06 03:24:04.780727 | orchestrator | 2026-04-06 03:24:04.780738 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-04-06 03:24:04.780747 | orchestrator | Monday 06 April 2026 03:24:00 +0000 (0:00:00.737) 0:04:50.193 ********** 2026-04-06 03:24:04.780756 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:24:04.780765 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:24:04.780782 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:24:04.780791 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:24:04.780800 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:24:04.780809 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:24:04.780818 | orchestrator | 2026-04-06 03:24:04.780826 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 
2026-04-06 03:24:04.780835 | orchestrator | Monday 06 April 2026 03:24:01 +0000 (0:00:00.882) 0:04:51.076 ********** 2026-04-06 03:24:04.780848 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 03:24:04.780874 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 03:24:04.780884 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 03:24:04.780904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 03:24:10.188850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 03:24:10.189036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 03:24:10.189097 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 03:24:10.189136 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 03:24:10.189155 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 03:24:10.189174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 03:24:10.189230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 03:24:10.189261 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 03:24:10.189311 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 03:24:10.189332 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 03:24:10.189350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 03:24:10.189368 | orchestrator | 2026-04-06 03:24:10.189388 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-04-06 03:24:10.189409 | orchestrator | Monday 06 April 2026 03:24:05 +0000 (0:00:03.915) 0:04:54.991 ********** 2026-04-06 03:24:10.189438 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 03:24:12.532936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 03:24:12.533091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 03:24:12.533135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 03:24:12.533165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 03:24:12.533192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 03:24:12.533323 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 03:24:12.533378 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 03:24:12.533395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 03:24:12.533411 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 03:24:12.533481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 03:24:12.533496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 03:24:12.533520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 03:24:31.656600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 03:24:31.656765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 03:24:31.656781 | orchestrator | 2026-04-06 03:24:31.656795 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-04-06 03:24:31.656807 | orchestrator | Monday 06 April 2026 03:24:12 +0000 (0:00:07.200) 0:05:02.191 ********** 2026-04-06 03:24:31.656818 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:24:31.656829 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:24:31.656838 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:24:31.656848 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:24:31.656858 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:24:31.656868 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:24:31.656878 | orchestrator | 2026-04-06 03:24:31.656888 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-04-06 03:24:31.656898 | orchestrator | Monday 06 April 2026 03:24:14 +0000 (0:00:01.593) 0:05:03.785 ********** 2026-04-06 03:24:31.656908 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-06 03:24:31.656919 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-06 03:24:31.656929 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-06 03:24:31.656939 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-06 03:24:31.656949 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-06 03:24:31.656958 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-06 03:24:31.656968 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-06 03:24:31.656979 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:24:31.656988 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-06 03:24:31.656998 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:24:31.657008 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-06 03:24:31.657018 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:24:31.657028 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-06 03:24:31.657038 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-06 03:24:31.657048 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-06 03:24:31.657085 | orchestrator | 2026-04-06 03:24:31.657095 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-04-06 03:24:31.657106 | orchestrator | Monday 06 April 2026 03:24:18 +0000 (0:00:04.040) 0:05:07.826 ********** 2026-04-06 03:24:31.657116 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:24:31.657128 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:24:31.657139 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:24:31.657150 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:24:31.657162 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:24:31.657173 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:24:31.657183 | orchestrator | 2026-04-06 03:24:31.657194 | orchestrator | TASK 
[nova-cell : Copying over libvirt SASL configuration] ********************* 2026-04-06 03:24:31.657205 | orchestrator | Monday 06 April 2026 03:24:18 +0000 (0:00:00.680) 0:05:08.507 ********** 2026-04-06 03:24:31.657216 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-06 03:24:31.657228 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-06 03:24:31.657239 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-06 03:24:31.657250 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-06 03:24:31.657278 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-06 03:24:31.657289 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-06 03:24:31.657307 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-06 03:24:31.657318 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-06 03:24:31.657329 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-06 03:24:31.657340 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:24:31.657351 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-06 03:24:31.657362 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-06 03:24:31.657373 | orchestrator | 
skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-06 03:24:31.657384 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:24:31.657395 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-06 03:24:31.657406 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-06 03:24:31.657417 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-06 03:24:31.657445 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:24:31.657468 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-06 03:24:31.657480 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-06 03:24:31.657490 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-06 03:24:31.657500 | orchestrator | 2026-04-06 03:24:31.657510 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-04-06 03:24:31.657520 | orchestrator | Monday 06 April 2026 03:24:24 +0000 (0:00:05.724) 0:05:14.231 ********** 2026-04-06 03:24:31.657530 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-06 03:24:31.657548 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-06 03:24:31.657558 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-06 03:24:31.657568 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-06 
03:24:31.657577 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-06 03:24:31.657587 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-06 03:24:31.657597 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-06 03:24:31.657607 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-06 03:24:31.657617 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-06 03:24:31.657626 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-06 03:24:31.657673 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-06 03:24:31.657684 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-06 03:24:31.657694 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-06 03:24:31.657703 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:24:31.657713 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-06 03:24:31.657723 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-06 03:24:31.657732 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-06 03:24:31.657742 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:24:31.657752 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-06 03:24:31.657762 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:24:31.657772 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-06 03:24:31.657781 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-06 03:24:31.657791 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-06 03:24:31.657801 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-06 03:24:31.657811 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-06 03:24:31.657827 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-06 03:24:36.648488 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-06 03:24:36.648590 | orchestrator | 2026-04-06 03:24:36.648608 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-04-06 03:24:36.648621 | orchestrator | Monday 06 April 2026 03:24:31 +0000 (0:00:07.064) 0:05:21.296 ********** 2026-04-06 03:24:36.648703 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:24:36.648719 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:24:36.648729 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:24:36.648753 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:24:36.648763 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:24:36.648769 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:24:36.648775 | orchestrator | 2026-04-06 03:24:36.648790 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-04-06 03:24:36.648796 | orchestrator | Monday 06 April 2026 03:24:32 +0000 (0:00:00.862) 0:05:22.158 ********** 2026-04-06 03:24:36.648803 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:24:36.648809 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:24:36.648815 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:24:36.648843 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:24:36.648849 | 
orchestrator | skipping: [testbed-node-1] 2026-04-06 03:24:36.648855 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:24:36.648862 | orchestrator | 2026-04-06 03:24:36.648868 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-04-06 03:24:36.648874 | orchestrator | Monday 06 April 2026 03:24:33 +0000 (0:00:00.704) 0:05:22.862 ********** 2026-04-06 03:24:36.648880 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:24:36.648886 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:24:36.648894 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:24:36.648900 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:24:36.648906 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:24:36.648913 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:24:36.648919 | orchestrator | 2026-04-06 03:24:36.648925 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-04-06 03:24:36.648931 | orchestrator | Monday 06 April 2026 03:24:35 +0000 (0:00:02.244) 0:05:25.107 ********** 2026-04-06 03:24:36.648941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}})  2026-04-06 03:24:36.648951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 03:24:36.648960 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 03:24:36.648967 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:24:36.648996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 03:24:36.649010 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 03:24:36.649018 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 03:24:36.649026 | orchestrator | skipping: 
[testbed-node-4] 2026-04-06 03:24:36.649033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 03:24:36.649041 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 03:24:36.649054 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 03:24:40.414325 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:24:40.414539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 03:24:40.414564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 03:24:40.414578 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:24:40.414590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 03:24:40.414602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 03:24:40.414614 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:24:40.414625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 03:24:40.414671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 03:24:40.414703 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:24:40.414715 | orchestrator | 2026-04-06 03:24:40.414727 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-04-06 03:24:40.414741 | orchestrator | Monday 06 April 2026 03:24:36 +0000 (0:00:01.459) 0:05:26.566 ********** 2026-04-06 03:24:40.414753 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-06 03:24:40.414786 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-06 03:24:40.414798 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:24:40.414809 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-06 03:24:40.414831 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-06 03:24:40.414844 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:24:40.414857 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-06 03:24:40.414870 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-06 03:24:40.414882 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:24:40.414895 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-06 03:24:40.414907 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-06 03:24:40.414920 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:24:40.414932 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute) 
 2026-04-06 03:24:40.414944 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-06 03:24:40.414957 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:24:40.414970 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-06 03:24:40.414983 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-06 03:24:40.414995 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:24:40.415011 | orchestrator | 2026-04-06 03:24:40.415031 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-04-06 03:24:40.415048 | orchestrator | Monday 06 April 2026 03:24:37 +0000 (0:00:01.034) 0:05:27.601 ********** 2026-04-06 03:24:40.415069 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 03:24:40.415092 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 03:24:40.415111 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 03:24:40.415164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 03:24:42.859332 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 03:24:42.859445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 03:24:42.859457 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 03:24:42.859465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 03:24:42.859473 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 03:24:42.859499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 03:24:42.859534 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 03:24:42.859544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 03:24:42.859552 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 03:24:42.859560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 03:24:42.859575 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 03:24:42.859583 | orchestrator | 2026-04-06 03:24:42.859592 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-06 03:24:42.859600 | orchestrator | Monday 06 April 2026 03:24:40 +0000 (0:00:02.833) 
0:05:30.435 ********** 2026-04-06 03:24:42.859608 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:24:42.859616 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:24:42.859623 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:24:42.859668 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:24:42.859676 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:24:42.859683 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:24:42.859690 | orchestrator | 2026-04-06 03:24:42.859698 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-06 03:24:42.859705 | orchestrator | Monday 06 April 2026 03:24:41 +0000 (0:00:00.917) 0:05:31.353 ********** 2026-04-06 03:24:42.859712 | orchestrator | 2026-04-06 03:24:42.859719 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-06 03:24:42.859727 | orchestrator | Monday 06 April 2026 03:24:41 +0000 (0:00:00.152) 0:05:31.505 ********** 2026-04-06 03:24:42.859734 | orchestrator | 2026-04-06 03:24:42.859741 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-06 03:24:42.859749 | orchestrator | Monday 06 April 2026 03:24:41 +0000 (0:00:00.147) 0:05:31.652 ********** 2026-04-06 03:24:42.859756 | orchestrator | 2026-04-06 03:24:42.859767 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-06 03:24:42.859780 | orchestrator | Monday 06 April 2026 03:24:42 +0000 (0:00:00.176) 0:05:31.828 ********** 2026-04-06 03:28:04.625889 | orchestrator | 2026-04-06 03:28:04.626093 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-06 03:28:04.626125 | orchestrator | Monday 06 April 2026 03:24:42 +0000 (0:00:00.149) 0:05:31.978 ********** 2026-04-06 03:28:04.626141 | orchestrator | 2026-04-06 03:28:04.626157 | orchestrator | TASK [nova-cell : Flush 
handlers] ********************************************** 2026-04-06 03:28:04.626171 | orchestrator | Monday 06 April 2026 03:24:42 +0000 (0:00:00.346) 0:05:32.324 ********** 2026-04-06 03:28:04.626186 | orchestrator | 2026-04-06 03:28:04.626203 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-04-06 03:28:04.626218 | orchestrator | Monday 06 April 2026 03:24:42 +0000 (0:00:00.176) 0:05:32.500 ********** 2026-04-06 03:28:04.626235 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:28:04.626251 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:28:04.626266 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:28:04.626280 | orchestrator | 2026-04-06 03:28:04.626323 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-04-06 03:28:04.626337 | orchestrator | Monday 06 April 2026 03:24:50 +0000 (0:00:07.245) 0:05:39.746 ********** 2026-04-06 03:28:04.626352 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:28:04.626367 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:28:04.626383 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:28:04.626398 | orchestrator | 2026-04-06 03:28:04.626413 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-04-06 03:28:04.626427 | orchestrator | Monday 06 April 2026 03:25:10 +0000 (0:00:20.107) 0:05:59.854 ********** 2026-04-06 03:28:04.626475 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:28:04.626492 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:28:04.626507 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:28:04.626521 | orchestrator | 2026-04-06 03:28:04.626536 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-04-06 03:28:04.626552 | orchestrator | Monday 06 April 2026 03:25:36 +0000 (0:00:26.801) 0:06:26.655 ********** 2026-04-06 03:28:04.626567 | orchestrator | 
changed: [testbed-node-3] 2026-04-06 03:28:04.626582 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:28:04.626596 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:28:04.626663 | orchestrator | 2026-04-06 03:28:04.626677 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-04-06 03:28:04.626688 | orchestrator | Monday 06 April 2026 03:26:21 +0000 (0:00:44.062) 0:07:10.718 ********** 2026-04-06 03:28:04.626699 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:28:04.626709 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:28:04.626720 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:28:04.626731 | orchestrator | 2026-04-06 03:28:04.626740 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-04-06 03:28:04.626755 | orchestrator | Monday 06 April 2026 03:26:21 +0000 (0:00:00.887) 0:07:11.605 ********** 2026-04-06 03:28:04.626770 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:28:04.626790 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:28:04.626808 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:28:04.626823 | orchestrator | 2026-04-06 03:28:04.626838 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-04-06 03:28:04.626854 | orchestrator | Monday 06 April 2026 03:26:22 +0000 (0:00:00.800) 0:07:12.405 ********** 2026-04-06 03:28:04.626869 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:28:04.626881 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:28:04.626896 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:28:04.626912 | orchestrator | 2026-04-06 03:28:04.626927 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-04-06 03:28:04.626943 | orchestrator | Monday 06 April 2026 03:26:53 +0000 (0:00:31.048) 0:07:43.453 ********** 2026-04-06 03:28:04.626958 | orchestrator | 
skipping: [testbed-node-3] 2026-04-06 03:28:04.626974 | orchestrator | 2026-04-06 03:28:04.626989 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-04-06 03:28:04.627006 | orchestrator | Monday 06 April 2026 03:26:53 +0000 (0:00:00.151) 0:07:43.605 ********** 2026-04-06 03:28:04.627023 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:28:04.627040 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:28:04.627056 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:28:04.627067 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:28:04.627076 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:28:04.627085 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-04-06 03:28:04.627097 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-06 03:28:04.627106 | orchestrator | 2026-04-06 03:28:04.627114 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-04-06 03:28:04.627123 | orchestrator | Monday 06 April 2026 03:27:16 +0000 (0:00:22.658) 0:08:06.264 ********** 2026-04-06 03:28:04.627132 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:28:04.627141 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:28:04.627149 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:28:04.627158 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:28:04.627167 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:28:04.627175 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:28:04.627184 | orchestrator | 2026-04-06 03:28:04.627192 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-04-06 03:28:04.627201 | orchestrator | Monday 06 April 2026 03:27:27 +0000 (0:00:10.750) 0:08:17.014 ********** 2026-04-06 03:28:04.627222 | orchestrator | skipping: 
[testbed-node-3] 2026-04-06 03:28:04.627231 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:28:04.627240 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:28:04.627248 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:28:04.627257 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:28:04.627266 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-04-06 03:28:04.627275 | orchestrator | 2026-04-06 03:28:04.627284 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-06 03:28:04.627308 | orchestrator | Monday 06 April 2026 03:27:31 +0000 (0:00:04.322) 0:08:21.337 ********** 2026-04-06 03:28:04.627317 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-06 03:28:04.627326 | orchestrator | 2026-04-06 03:28:04.627358 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-06 03:28:04.627368 | orchestrator | Monday 06 April 2026 03:27:44 +0000 (0:00:12.486) 0:08:33.824 ********** 2026-04-06 03:28:04.627376 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-06 03:28:04.627385 | orchestrator | 2026-04-06 03:28:04.627394 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-04-06 03:28:04.627403 | orchestrator | Monday 06 April 2026 03:27:45 +0000 (0:00:01.668) 0:08:35.493 ********** 2026-04-06 03:28:04.627412 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:28:04.627420 | orchestrator | 2026-04-06 03:28:04.627429 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-04-06 03:28:04.627438 | orchestrator | Monday 06 April 2026 03:27:47 +0000 (0:00:01.844) 0:08:37.337 ********** 2026-04-06 03:28:04.627446 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-06 03:28:04.627455 | orchestrator | 2026-04-06 
03:28:04.627463 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-04-06 03:28:04.627472 | orchestrator | Monday 06 April 2026 03:27:59 +0000 (0:00:11.637) 0:08:48.975 ********** 2026-04-06 03:28:04.627481 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:28:04.627490 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:28:04.627499 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:28:04.627507 | orchestrator | ok: [testbed-node-0] 2026-04-06 03:28:04.627516 | orchestrator | ok: [testbed-node-1] 2026-04-06 03:28:04.627524 | orchestrator | ok: [testbed-node-2] 2026-04-06 03:28:04.627533 | orchestrator | 2026-04-06 03:28:04.627542 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-04-06 03:28:04.627550 | orchestrator | 2026-04-06 03:28:04.627559 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-04-06 03:28:04.627568 | orchestrator | Monday 06 April 2026 03:28:01 +0000 (0:00:01.916) 0:08:50.892 ********** 2026-04-06 03:28:04.627576 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:28:04.627585 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:28:04.627594 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:28:04.627603 | orchestrator | 2026-04-06 03:28:04.627668 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-04-06 03:28:04.627693 | orchestrator | 2026-04-06 03:28:04.627708 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-04-06 03:28:04.627721 | orchestrator | Monday 06 April 2026 03:28:02 +0000 (0:00:01.038) 0:08:51.930 ********** 2026-04-06 03:28:04.627735 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:28:04.627750 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:28:04.627763 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:28:04.627778 | orchestrator | 
2026-04-06 03:28:04.627791 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-04-06 03:28:04.627889 | orchestrator |
2026-04-06 03:28:04.627910 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-04-06 03:28:04.627925 | orchestrator | Monday 06 April 2026 03:28:03 +0000 (0:00:00.838) 0:08:52.769 **********
2026-04-06 03:28:04.627938 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-04-06 03:28:04.627952 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-04-06 03:28:04.628000 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-06 03:28:04.628015 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-04-06 03:28:04.628029 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-04-06 03:28:04.628042 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-04-06 03:28:04.628056 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:28:04.628069 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-04-06 03:28:04.628083 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-04-06 03:28:04.628097 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-06 03:28:04.628112 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-04-06 03:28:04.628125 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-04-06 03:28:04.628139 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-04-06 03:28:04.628169 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:28:04.628183 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-04-06 03:28:04.628198 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-04-06 03:28:04.628211 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-06 03:28:04.628226 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-04-06 03:28:04.628239 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-04-06 03:28:04.628254 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-04-06 03:28:04.628267 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:28:04.628280 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-04-06 03:28:04.628295 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-06 03:28:04.628309 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-06 03:28:04.628322 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-04-06 03:28:04.628337 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-04-06 03:28:04.628351 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-04-06 03:28:04.628366 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:04.628380 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-04-06 03:28:04.628396 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-06 03:28:04.628410 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-06 03:28:04.628425 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-04-06 03:28:04.628451 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-04-06 03:28:04.628466 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-04-06 03:28:04.628498 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:28:07.169250 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-04-06 03:28:07.169366 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-06 03:28:07.169383 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-06 03:28:07.169398 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-04-06 03:28:07.169410 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-04-06 03:28:07.169421 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-04-06 03:28:07.169432 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:28:07.169443 | orchestrator |
2026-04-06 03:28:07.169455 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-04-06 03:28:07.169466 | orchestrator |
2026-04-06 03:28:07.169477 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-04-06 03:28:07.169488 | orchestrator | Monday 06 April 2026 03:28:04 +0000 (0:00:01.511) 0:08:54.280 **********
2026-04-06 03:28:07.169528 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-04-06 03:28:07.169540 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-04-06 03:28:07.169551 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:07.169561 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-04-06 03:28:07.169572 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-04-06 03:28:07.169583 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:28:07.169594 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-04-06 03:28:07.169604 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-04-06 03:28:07.169709 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:28:07.169721 | orchestrator |
2026-04-06 03:28:07.169732 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-04-06 03:28:07.169743 | orchestrator |
2026-04-06 03:28:07.169756 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-04-06 03:28:07.169769 | orchestrator | Monday 06 April 2026 03:28:05 +0000 (0:00:00.578) 0:08:54.859 **********
2026-04-06 03:28:07.169782 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:07.169794 | orchestrator |
2026-04-06 03:28:07.169808 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-04-06 03:28:07.169821 | orchestrator |
2026-04-06 03:28:07.169833 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-04-06 03:28:07.169846 | orchestrator | Monday 06 April 2026 03:28:06 +0000 (0:00:00.950) 0:08:55.809 **********
2026-04-06 03:28:07.169859 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:07.169871 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:28:07.169883 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:28:07.169896 | orchestrator |
2026-04-06 03:28:07.169909 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 03:28:07.169922 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 03:28:07.169938 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-04-06 03:28:07.169951 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-04-06 03:28:07.169964 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-04-06 03:28:07.169977 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-06 03:28:07.169990 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-04-06 03:28:07.170002 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-04-06 03:28:07.170079 | orchestrator |
2026-04-06 03:28:07.170096 | orchestrator |
2026-04-06 03:28:07.170109 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 03:28:07.170122 | orchestrator | Monday 06 April 2026 03:28:06 +0000 (0:00:00.529) 0:08:56.338 **********
2026-04-06 03:28:07.170133 | orchestrator | ===============================================================================
2026-04-06 03:28:07.170144 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 44.06s
2026-04-06 03:28:07.170155 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 32.51s
2026-04-06 03:28:07.170166 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 31.05s
2026-04-06 03:28:07.170177 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 26.80s
2026-04-06 03:28:07.170188 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.66s
2026-04-06 03:28:07.170210 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.81s
2026-04-06 03:28:07.170221 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 20.11s
2026-04-06 03:28:07.170232 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 18.81s
2026-04-06 03:28:07.170258 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.88s
2026-04-06 03:28:07.170269 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.66s
2026-04-06 03:28:07.170301 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.76s
2026-04-06 03:28:07.170313 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.49s
2026-04-06 03:28:07.170324 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.39s
2026-04-06 03:28:07.170335 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.92s
2026-04-06 03:28:07.170346 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.64s
2026-04-06 03:28:07.170357 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.75s
2026-04-06 03:28:07.170367 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.49s
2026-04-06 03:28:07.170378 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.66s
2026-04-06 03:28:07.170389 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.60s
2026-04-06 03:28:07.170400 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 7.25s
2026-04-06 03:28:10.031964 | orchestrator | 2026-04-06 03:28:10 | INFO  | Task 5573b78a-0d91-4972-af90-651699ddf2a6 (horizon) was prepared for execution.
2026-04-06 03:28:10.032065 | orchestrator | 2026-04-06 03:28:10 | INFO  | It takes a moment until task 5573b78a-0d91-4972-af90-651699ddf2a6 (horizon) has been started and output is visible here.
2026-04-06 03:28:18.029215 | orchestrator |
2026-04-06 03:28:18.029296 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 03:28:18.029302 | orchestrator |
2026-04-06 03:28:18.029307 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 03:28:18.029312 | orchestrator | Monday 06 April 2026 03:28:14 +0000 (0:00:00.289) 0:00:00.289 **********
2026-04-06 03:28:18.029316 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:28:18.029321 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:28:18.029325 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:28:18.029330 | orchestrator |
2026-04-06 03:28:18.029334 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 03:28:18.029338 | orchestrator | Monday 06 April 2026 03:28:15 +0000 (0:00:00.348) 0:00:00.638 **********
2026-04-06 03:28:18.029341 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-04-06 03:28:18.029346 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-04-06 03:28:18.029350 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-04-06 03:28:18.029354 | orchestrator |
2026-04-06 03:28:18.029358 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-04-06 03:28:18.029362 | orchestrator |
2026-04-06 03:28:18.029366 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-06 03:28:18.029370 | orchestrator | Monday 06 April 2026 03:28:15 +0000 (0:00:00.464) 0:00:01.102 **********
2026-04-06 03:28:18.029375 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 03:28:18.029379 | orchestrator |
2026-04-06 03:28:18.029391 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-04-06 03:28:18.029395 | orchestrator | Monday 06 April 2026 03:28:16 +0000 (0:00:00.562) 0:00:01.665 **********
2026-04-06 03:28:18.029415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-06 03:28:18.029451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-06 03:28:18.029464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-06 03:28:18.029468 | orchestrator |
2026-04-06 03:28:18.029472 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-04-06 03:28:18.029476 | orchestrator | Monday 06 April 2026 03:28:17 +0000 (0:00:01.307) 0:00:02.973 **********
2026-04-06 03:28:18.029480 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:28:18.029484 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:28:18.029488 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:28:18.029491 | orchestrator |
2026-04-06 03:28:18.029495 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-06 03:28:18.029499 | orchestrator | Monday 06 April 2026 03:28:17 +0000 (0:00:00.536) 0:00:03.509 **********
2026-04-06 03:28:18.029506 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-06 03:28:24.830971 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-06 03:28:24.831070 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-04-06 03:28:24.831082 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-04-06 03:28:24.831090 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-04-06 03:28:24.831099 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-04-06 03:28:24.831107 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-04-06 03:28:24.831115 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-04-06 03:28:24.831124 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-06 03:28:24.831152 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-06 03:28:24.831161 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-04-06 03:28:24.831169 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-04-06 03:28:24.831177 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-04-06 03:28:24.831186 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-04-06 03:28:24.831194 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-04-06 03:28:24.831202 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-04-06 03:28:24.831210 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-06 03:28:24.831223 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-06 03:28:24.831237 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-04-06 03:28:24.831251 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-04-06 03:28:24.831266 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-04-06 03:28:24.831276 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-04-06 03:28:24.831284 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-04-06 03:28:24.831293 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-04-06 03:28:24.831302 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-04-06 03:28:24.831313 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-04-06 03:28:24.831321 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-04-06 03:28:24.831329 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-04-06 03:28:24.831337 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-04-06 03:28:24.831360 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-04-06 03:28:24.831374 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-04-06 03:28:24.831387 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-04-06 03:28:24.831400 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-04-06 03:28:24.831415 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-04-06 03:28:24.831428 | orchestrator |
2026-04-06 03:28:24.831441 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-06 03:28:24.831456 | orchestrator | Monday 06 April 2026 03:28:18 +0000 (0:00:00.862) 0:00:04.371 **********
2026-04-06 03:28:24.831469 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:28:24.831483 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:28:24.831496 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:28:24.831523 | orchestrator |
2026-04-06 03:28:24.831537 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-06 03:28:24.831550 | orchestrator | Monday 06 April 2026 03:28:19 +0000 (0:00:00.374) 0:00:04.746 **********
2026-04-06 03:28:24.831564 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:24.831579 | orchestrator |
2026-04-06 03:28:24.831606 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-06 03:28:24.831679 | orchestrator | Monday 06 April 2026 03:28:19 +0000 (0:00:00.365) 0:00:05.112 **********
2026-04-06 03:28:24.831692 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:24.831706 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:28:24.831719 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:28:24.831734 | orchestrator |
2026-04-06 03:28:24.831757 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-06 03:28:24.831772 | orchestrator | Monday 06 April 2026 03:28:19 +0000 (0:00:00.389) 0:00:05.501 **********
2026-04-06 03:28:24.831786 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:28:24.831797 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:28:24.831806 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:28:24.831813 | orchestrator |
2026-04-06 03:28:24.831822 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-06 03:28:24.831830 | orchestrator | Monday 06 April 2026 03:28:20 +0000 (0:00:00.372) 0:00:05.874 **********
2026-04-06 03:28:24.831838 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:24.831846 | orchestrator |
2026-04-06 03:28:24.831853 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-06 03:28:24.831861 | orchestrator | Monday 06 April 2026 03:28:20 +0000 (0:00:00.157) 0:00:06.031 **********
2026-04-06 03:28:24.831869 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:24.831878 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:28:24.831886 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:28:24.831894 | orchestrator |
2026-04-06 03:28:24.831902 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-06 03:28:24.831910 | orchestrator | Monday 06 April 2026 03:28:20 +0000 (0:00:00.312) 0:00:06.344 **********
2026-04-06 03:28:24.831918 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:28:24.831926 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:28:24.831934 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:28:24.831942 | orchestrator |
2026-04-06 03:28:24.831950 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-06 03:28:24.831958 | orchestrator | Monday 06 April 2026 03:28:21 +0000 (0:00:00.592) 0:00:06.936 **********
2026-04-06 03:28:24.831966 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:24.831974 | orchestrator |
2026-04-06 03:28:24.831982 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-06 03:28:24.831990 | orchestrator | Monday 06 April 2026 03:28:21 +0000 (0:00:00.150) 0:00:07.087 **********
2026-04-06 03:28:24.831998 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:24.832005 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:28:24.832013 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:28:24.832021 | orchestrator |
2026-04-06 03:28:24.832029 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-06 03:28:24.832037 | orchestrator | Monday 06 April 2026 03:28:21 +0000 (0:00:00.401) 0:00:07.489 **********
2026-04-06 03:28:24.832045 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:28:24.832053 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:28:24.832061 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:28:24.832069 | orchestrator |
2026-04-06 03:28:24.832077 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-06 03:28:24.832084 | orchestrator | Monday 06 April 2026 03:28:22 +0000 (0:00:00.344) 0:00:07.834 **********
2026-04-06 03:28:24.832092 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:24.832100 | orchestrator |
2026-04-06 03:28:24.832108 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-06 03:28:24.832116 | orchestrator | Monday 06 April 2026 03:28:22 +0000 (0:00:00.137) 0:00:07.971 **********
2026-04-06 03:28:24.832132 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:24.832140 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:28:24.832148 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:28:24.832156 | orchestrator |
2026-04-06 03:28:24.832164 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-06 03:28:24.832172 | orchestrator | Monday 06 April 2026 03:28:22 +0000 (0:00:00.575) 0:00:08.546 **********
2026-04-06 03:28:24.832179 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:28:24.832187 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:28:24.832195 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:28:24.832203 | orchestrator |
2026-04-06 03:28:24.832211 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-06 03:28:24.832226 | orchestrator | Monday 06 April 2026 03:28:23 +0000 (0:00:00.341) 0:00:08.887 **********
2026-04-06 03:28:24.832234 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:24.832242 | orchestrator |
2026-04-06 03:28:24.832250 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-06 03:28:24.832258 | orchestrator | Monday 06 April 2026 03:28:23 +0000 (0:00:00.142) 0:00:09.030 **********
2026-04-06 03:28:24.832265 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:24.832273 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:28:24.832281 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:28:24.832289 | orchestrator |
2026-04-06 03:28:24.832297 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-06 03:28:24.832305 | orchestrator | Monday 06 April 2026 03:28:23 +0000 (0:00:00.348) 0:00:09.379 **********
2026-04-06 03:28:24.832313 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:28:24.832321 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:28:24.832329 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:28:24.832336 | orchestrator |
2026-04-06 03:28:24.832344 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-06 03:28:24.832352 | orchestrator | Monday 06 April 2026 03:28:24 +0000 (0:00:00.360) 0:00:09.739 **********
2026-04-06 03:28:24.832360 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:24.832368 | orchestrator |
2026-04-06 03:28:24.832376 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-06 03:28:24.832384 | orchestrator | Monday 06 April 2026 03:28:24 +0000 (0:00:00.352) 0:00:10.092 **********
2026-04-06 03:28:24.832392 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:24.832400 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:28:24.832408 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:28:24.832415 | orchestrator |
2026-04-06 03:28:24.832423 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-06 03:28:24.832439 | orchestrator | Monday 06 April 2026 03:28:24 +0000 (0:00:00.335) 0:00:10.428 **********
2026-04-06 03:28:39.843172 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:28:39.843289 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:28:39.843306 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:28:39.843318 | orchestrator |
2026-04-06 03:28:39.843330 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-06 03:28:39.843343 | orchestrator | Monday 06 April 2026 03:28:25 +0000 (0:00:00.366) 0:00:10.795 **********
2026-04-06 03:28:39.843354 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:39.843367 | orchestrator |
2026-04-06 03:28:39.843378 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-06 03:28:39.843389 | orchestrator | Monday 06 April 2026 03:28:25 +0000 (0:00:00.163) 0:00:10.958 **********
2026-04-06 03:28:39.843400 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:39.843411 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:28:39.843422 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:28:39.843433 | orchestrator |
2026-04-06 03:28:39.843444 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-06 03:28:39.843455 | orchestrator | Monday 06 April 2026 03:28:25 +0000 (0:00:00.314) 0:00:11.273 **********
2026-04-06 03:28:39.843496 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:28:39.843515 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:28:39.843532 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:28:39.843552 | orchestrator |
2026-04-06 03:28:39.843573 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-06 03:28:39.843592 | orchestrator | Monday 06 April 2026 03:28:26 +0000 (0:00:00.639) 0:00:11.913 **********
2026-04-06 03:28:39.843636 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:39.843649 | orchestrator |
2026-04-06 03:28:39.843660 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-06 03:28:39.843672 | orchestrator | Monday 06 April 2026 03:28:26 +0000 (0:00:00.154) 0:00:12.068 **********
2026-04-06 03:28:39.843682 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:39.843693 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:28:39.843704 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:28:39.843728 | orchestrator |
2026-04-06 03:28:39.843741 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-06 03:28:39.843754 | orchestrator | Monday 06 April 2026 03:28:26 +0000 (0:00:00.350) 0:00:12.418 **********
2026-04-06 03:28:39.843766 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:28:39.843779 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:28:39.843791 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:28:39.843803 | orchestrator |
2026-04-06 03:28:39.843816 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-06 03:28:39.843828 | orchestrator | Monday 06 April 2026 03:28:27 +0000 (0:00:00.403) 0:00:12.821 **********
2026-04-06 03:28:39.843841 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:39.843854 | orchestrator |
2026-04-06 03:28:39.843866 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-06 03:28:39.843879 | orchestrator | Monday 06 April 2026 03:28:27 +0000 (0:00:00.150) 0:00:12.972 **********
2026-04-06 03:28:39.843892 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:39.843905 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:28:39.843916 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:28:39.843927 | orchestrator |
2026-04-06 03:28:39.843938 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-06 03:28:39.843949 | orchestrator | Monday 06 April 2026 03:28:27 +0000 (0:00:00.583) 0:00:13.556 **********
2026-04-06 03:28:39.843960 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:28:39.843971 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:28:39.843982 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:28:39.843993 | orchestrator |
2026-04-06 03:28:39.844004 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-06 03:28:39.844015 | orchestrator | Monday 06 April 2026 03:28:28 +0000 (0:00:00.440) 0:00:13.996 **********
2026-04-06 03:28:39.844026 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:39.844037 | orchestrator |
2026-04-06 03:28:39.844048 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-06 03:28:39.844059 | orchestrator | Monday 06 April 2026 03:28:28 +0000 (0:00:00.129) 0:00:14.125 **********
2026-04-06 03:28:39.844070 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:28:39.844081 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:28:39.844092 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:28:39.844103 | orchestrator |
2026-04-06 03:28:39.844130 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-04-06 03:28:39.844149 | orchestrator | Monday 06 April 2026 03:28:28 +0000 (0:00:00.352) 0:00:14.478 **********
2026-04-06 03:28:39.844169 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:28:39.844187 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:28:39.844206 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:28:39.844219 | orchestrator |
2026-04-06 03:28:39.844231 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-04-06 03:28:39.844242 | orchestrator | Monday 06 April 2026 03:28:30 +0000 (0:00:01.904) 0:00:16.382 **********
2026-04-06 03:28:39.844265 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-06 03:28:39.844277 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-06 03:28:39.844288 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-06 03:28:39.844299 | orchestrator |
2026-04-06 03:28:39.844309 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-04-06 03:28:39.844320 | orchestrator | Monday 06 April 2026 03:28:32 +0000 (0:00:01.889) 0:00:18.272 **********
2026-04-06 03:28:39.844336 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-06 03:28:39.844355 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-06 03:28:39.844373 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-06 03:28:39.844391 | orchestrator |
2026-04-06 03:28:39.844403 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-04-06 03:28:39.844434 | orchestrator | Monday 06 April 2026 03:28:34 +0000 (0:00:01.869) 0:00:20.142 **********
2026-04-06 03:28:39.844447 | orchestrator |
changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-06 03:28:39.844471 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-06 03:28:39.844497 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-06 03:28:39.844515 | orchestrator | 2026-04-06 03:28:39.844532 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-04-06 03:28:39.844548 | orchestrator | Monday 06 April 2026 03:28:36 +0000 (0:00:01.618) 0:00:21.760 ********** 2026-04-06 03:28:39.844565 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:28:39.844581 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:28:39.844598 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:28:39.844650 | orchestrator | 2026-04-06 03:28:39.844669 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-04-06 03:28:39.844688 | orchestrator | Monday 06 April 2026 03:28:36 +0000 (0:00:00.546) 0:00:22.307 ********** 2026-04-06 03:28:39.844707 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:28:39.844726 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:28:39.844744 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:28:39.844760 | orchestrator | 2026-04-06 03:28:39.844771 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-06 03:28:39.844782 | orchestrator | Monday 06 April 2026 03:28:37 +0000 (0:00:00.397) 0:00:22.704 ********** 2026-04-06 03:28:39.844794 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 03:28:39.844805 | orchestrator | 2026-04-06 03:28:39.844816 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-06 
03:28:39.844827 | orchestrator | Monday 06 April 2026 03:28:37 +0000 (0:00:00.691) 0:00:23.395 ********** 2026-04-06 03:28:39.844855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 
'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-06 03:28:39.844900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-06 03:28:40.537151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-06 03:28:40.537251 | orchestrator | 2026-04-06 03:28:40.537260 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-06 03:28:40.537288 | orchestrator | Monday 06 April 2026 03:28:39 +0000 (0:00:02.040) 0:00:25.436 ********** 2026-04-06 03:28:40.537308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-06 03:28:40.537319 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:28:40.537334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-06 03:28:40.537339 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:28:40.537347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-06 03:28:42.874702 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:28:42.874766 | orchestrator | 2026-04-06 03:28:42.874775 | orchestrator | TASK 
[service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-06 03:28:42.874796 | orchestrator | Monday 06 April 2026 03:28:40 +0000 (0:00:00.700) 0:00:26.137 ********** 2026-04-06 03:28:42.874836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-06 03:28:42.874851 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:28:42.874877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-06 03:28:42.874907 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:28:42.874919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-06 03:28:42.874930 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:28:42.874940 | orchestrator | 2026-04-06 03:28:42.874946 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-04-06 03:28:42.874977 | orchestrator | Monday 06 April 2026 03:28:41 +0000 (0:00:00.883) 0:00:27.021 ********** 2026-04-06 03:28:42.874995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-06 03:29:32.423280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-06 03:29:32.423437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-06 03:29:32.423484 | 
orchestrator | 2026-04-06 03:29:32.423499 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-06 03:29:32.423517 | orchestrator | Monday 06 April 2026 03:28:42 +0000 (0:00:01.456) 0:00:28.477 ********** 2026-04-06 03:29:32.423533 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:29:32.423551 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:29:32.423567 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:29:32.423583 | orchestrator | 2026-04-06 03:29:32.423599 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-06 03:29:32.423653 | orchestrator | Monday 06 April 2026 03:28:43 +0000 (0:00:00.280) 0:00:28.758 ********** 2026-04-06 03:29:32.423670 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 03:29:32.423686 | orchestrator | 2026-04-06 03:29:32.423703 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-04-06 03:29:32.423720 | orchestrator | Monday 06 April 2026 03:28:43 +0000 (0:00:00.540) 0:00:29.299 ********** 2026-04-06 03:29:32.423735 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:29:32.423751 | orchestrator | 2026-04-06 03:29:32.423766 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-04-06 03:29:32.423782 | orchestrator | Monday 06 April 2026 03:28:45 +0000 (0:00:02.143) 0:00:31.442 ********** 2026-04-06 03:29:32.423798 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:29:32.423845 | orchestrator | 2026-04-06 03:29:32.423861 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-04-06 03:29:32.423876 | orchestrator | Monday 06 April 2026 03:28:48 +0000 (0:00:02.610) 0:00:34.053 ********** 2026-04-06 03:29:32.423892 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:29:32.423907 
| orchestrator | 2026-04-06 03:29:32.423922 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-06 03:29:32.423954 | orchestrator | Monday 06 April 2026 03:29:05 +0000 (0:00:16.812) 0:00:50.865 ********** 2026-04-06 03:29:32.423969 | orchestrator | 2026-04-06 03:29:32.423985 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-06 03:29:32.424001 | orchestrator | Monday 06 April 2026 03:29:05 +0000 (0:00:00.078) 0:00:50.944 ********** 2026-04-06 03:29:32.424017 | orchestrator | 2026-04-06 03:29:32.424031 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-06 03:29:32.424046 | orchestrator | Monday 06 April 2026 03:29:05 +0000 (0:00:00.088) 0:00:51.032 ********** 2026-04-06 03:29:32.424063 | orchestrator | 2026-04-06 03:29:32.424078 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-04-06 03:29:32.424094 | orchestrator | Monday 06 April 2026 03:29:05 +0000 (0:00:00.088) 0:00:51.121 ********** 2026-04-06 03:29:32.424108 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:29:32.424122 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:29:32.424137 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:29:32.424151 | orchestrator | 2026-04-06 03:29:32.424168 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 03:29:32.424184 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-06 03:29:32.424200 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-04-06 03:29:32.424215 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-04-06 03:29:32.424230 | orchestrator | 2026-04-06 03:29:32.424246 | orchestrator | 2026-04-06 
03:29:32.424262 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 03:29:32.424278 | orchestrator | Monday 06 April 2026 03:29:32 +0000 (0:00:26.883) 0:01:18.004 ********** 2026-04-06 03:29:32.424294 | orchestrator | =============================================================================== 2026-04-06 03:29:32.424309 | orchestrator | horizon : Restart horizon container ------------------------------------ 26.88s 2026-04-06 03:29:32.424324 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.81s 2026-04-06 03:29:32.424339 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.61s 2026-04-06 03:29:32.424355 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.14s 2026-04-06 03:29:32.424369 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 2.04s 2026-04-06 03:29:32.424385 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.90s 2026-04-06 03:29:32.424402 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.89s 2026-04-06 03:29:32.424430 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.87s 2026-04-06 03:29:32.424446 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.62s 2026-04-06 03:29:32.424461 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.46s 2026-04-06 03:29:32.424476 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.31s 2026-04-06 03:29:32.424493 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.88s 2026-04-06 03:29:32.424508 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.86s 2026-04-06 03:29:32.424546 | 
orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.70s 2026-04-06 03:29:32.904912 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.69s 2026-04-06 03:29:32.905014 | orchestrator | horizon : Update policy file name --------------------------------------- 0.64s 2026-04-06 03:29:32.905027 | orchestrator | horizon : Update policy file name --------------------------------------- 0.59s 2026-04-06 03:29:32.905035 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.58s 2026-04-06 03:29:32.905067 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.58s 2026-04-06 03:29:32.905076 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s 2026-04-06 03:29:35.698523 | orchestrator | 2026-04-06 03:29:35 | INFO  | Task 77c5c077-8955-4622-9532-361620c20dc9 (skyline) was prepared for execution. 2026-04-06 03:29:35.698750 | orchestrator | 2026-04-06 03:29:35 | INFO  | It takes a moment until task 77c5c077-8955-4622-9532-361620c20dc9 (skyline) has been started and output is visible here. 
2026-04-06 03:30:07.666123 | orchestrator | 2026-04-06 03:30:07.666227 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 03:30:07.666238 | orchestrator | 2026-04-06 03:30:07.666246 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 03:30:07.666254 | orchestrator | Monday 06 April 2026 03:29:40 +0000 (0:00:00.316) 0:00:00.316 ********** 2026-04-06 03:30:07.666262 | orchestrator | ok: [testbed-node-0] 2026-04-06 03:30:07.666270 | orchestrator | ok: [testbed-node-1] 2026-04-06 03:30:07.666277 | orchestrator | ok: [testbed-node-2] 2026-04-06 03:30:07.666285 | orchestrator | 2026-04-06 03:30:07.666292 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 03:30:07.666300 | orchestrator | Monday 06 April 2026 03:29:40 +0000 (0:00:00.330) 0:00:00.646 ********** 2026-04-06 03:30:07.666307 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True) 2026-04-06 03:30:07.666314 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True) 2026-04-06 03:30:07.666322 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True) 2026-04-06 03:30:07.666329 | orchestrator | 2026-04-06 03:30:07.666336 | orchestrator | PLAY [Apply role skyline] ****************************************************** 2026-04-06 03:30:07.666343 | orchestrator | 2026-04-06 03:30:07.666350 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-04-06 03:30:07.666358 | orchestrator | Monday 06 April 2026 03:29:41 +0000 (0:00:00.504) 0:00:01.151 ********** 2026-04-06 03:30:07.666365 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 03:30:07.666372 | orchestrator | 2026-04-06 03:30:07.666380 | orchestrator | TASK [service-ks-register : skyline | Creating services] *********************** 
2026-04-06 03:30:07.666387 | orchestrator | Monday 06 April 2026 03:29:41 +0000 (0:00:00.608) 0:00:01.759 ********** 2026-04-06 03:30:07.666394 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel)) 2026-04-06 03:30:07.666401 | orchestrator | 2026-04-06 03:30:07.666407 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] ********************** 2026-04-06 03:30:07.666413 | orchestrator | Monday 06 April 2026 03:29:45 +0000 (0:00:03.470) 0:00:05.230 ********** 2026-04-06 03:30:07.666419 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal) 2026-04-06 03:30:07.666425 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public) 2026-04-06 03:30:07.666431 | orchestrator | 2026-04-06 03:30:07.666437 | orchestrator | TASK [service-ks-register : skyline | Creating projects] *********************** 2026-04-06 03:30:07.666445 | orchestrator | Monday 06 April 2026 03:29:51 +0000 (0:00:06.485) 0:00:11.716 ********** 2026-04-06 03:30:07.666452 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-06 03:30:07.666461 | orchestrator | 2026-04-06 03:30:07.666469 | orchestrator | TASK [service-ks-register : skyline | Creating users] ************************** 2026-04-06 03:30:07.666476 | orchestrator | Monday 06 April 2026 03:29:55 +0000 (0:00:03.185) 0:00:14.901 ********** 2026-04-06 03:30:07.666484 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-06 03:30:07.666491 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service) 2026-04-06 03:30:07.666498 | orchestrator | 2026-04-06 03:30:07.666505 | orchestrator | TASK [service-ks-register : skyline | Creating roles] ************************** 2026-04-06 03:30:07.666513 | orchestrator | Monday 06 April 2026 03:29:59 +0000 (0:00:04.040) 0:00:18.941 ********** 2026-04-06 03:30:07.666548 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-04-06 03:30:07.666555 | orchestrator | 2026-04-06 03:30:07.666562 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-04-06 03:30:07.666570 | orchestrator | Monday 06 April 2026 03:30:02 +0000 (0:00:03.182) 0:00:22.124 ********** 2026-04-06 03:30:07.666577 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-04-06 03:30:07.666584 | orchestrator | 2026-04-06 03:30:07.666591 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-04-06 03:30:07.666658 | orchestrator | Monday 06 April 2026 03:30:06 +0000 (0:00:03.943) 0:00:26.067 ********** 2026-04-06 03:30:07.666672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:07.666700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:07.666709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:07.666718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:07.666740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:07.666755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:11.824873 | orchestrator | 2026-04-06 03:30:11.824945 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-04-06 03:30:11.824953 | orchestrator | Monday 06 April 2026 03:30:07 +0000 (0:00:01.405) 0:00:27.473 ********** 2026-04-06 03:30:11.824966 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 03:30:11.824971 | orchestrator | 2026-04-06 03:30:11.824976 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-04-06 03:30:11.824980 | orchestrator | Monday 06 April 2026 03:30:08 +0000 (0:00:00.878) 0:00:28.352 ********** 2026-04-06 03:30:11.824987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:11.824994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:11.825028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:11.825044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:11.825051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:11.825056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:11.825064 | orchestrator | 2026-04-06 03:30:11.825069 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-04-06 03:30:11.825073 | orchestrator | Monday 06 April 2026 03:30:11 +0000 (0:00:02.553) 0:00:30.905 ********** 2026-04-06 03:30:11.825081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-06 03:30:11.825086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-06 03:30:11.825090 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:30:11.825099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-06 03:30:13.261527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-06 03:30:13.261701 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:30:13.261733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-06 03:30:13.261741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-06 03:30:13.261747 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:30:13.261753 | orchestrator | 2026-04-06 03:30:13.261760 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-04-06 03:30:13.261767 | orchestrator | Monday 06 April 2026 03:30:11 +0000 (0:00:00.726) 0:00:31.632 ********** 2026-04-06 03:30:13.261773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-06 03:30:13.261807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-06 03:30:13.261815 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:30:13.261826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-06 03:30:13.261833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-06 03:30:13.261839 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:30:13.261846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-06 03:30:13.261858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-06 03:30:22.264134 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:30:22.264248 | orchestrator | 2026-04-06 03:30:22.264266 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-04-06 03:30:22.264279 | orchestrator | Monday 06 April 2026 03:30:13 +0000 (0:00:01.424) 0:00:33.057 ********** 2026-04-06 03:30:22.264308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:22.264323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:22.264334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:22.264370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:22.264401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:22.264416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:22.264427 | orchestrator | 2026-04-06 03:30:22.264437 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-04-06 03:30:22.264447 | orchestrator | Monday 06 April 2026 03:30:15 +0000 (0:00:02.526) 0:00:35.583 ********** 2026-04-06 03:30:22.264457 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-06 03:30:22.264468 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-06 03:30:22.264479 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-06 03:30:22.264489 | orchestrator | 2026-04-06 03:30:22.264499 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ******************** 2026-04-06 03:30:22.264509 | orchestrator | Monday 06 April 2026 03:30:17 +0000 (0:00:01.569) 0:00:37.153 ********** 2026-04-06 03:30:22.264519 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-06 03:30:22.264529 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-06 03:30:22.264539 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-06 03:30:22.264560 | orchestrator | 2026-04-06 03:30:22.264571 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-04-06 03:30:22.264581 | orchestrator | Monday 06 April 2026 03:30:19 +0000 (0:00:02.271) 0:00:39.424 ********** 2026-04-06 03:30:22.264593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:22.264636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:24.538383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:24.538492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:24.538530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:24.538541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:24.538552 | orchestrator | 2026-04-06 03:30:24.538564 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-04-06 03:30:24.538575 | orchestrator | Monday 06 April 2026 03:30:22 +0000 (0:00:02.648) 0:00:42.073 ********** 2026-04-06 03:30:24.538585 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:30:24.538596 | orchestrator | skipping: 
[testbed-node-1] 2026-04-06 03:30:24.538606 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:30:24.538670 | orchestrator | 2026-04-06 03:30:24.538698 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-04-06 03:30:24.538709 | orchestrator | Monday 06 April 2026 03:30:22 +0000 (0:00:00.341) 0:00:42.415 ********** 2026-04-06 03:30:24.538727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:24.538739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:24.538757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:24.538767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:24.538801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-06 03:30:54.533473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}}}})
2026-04-06 03:30:54.533586 | orchestrator |
2026-04-06 03:30:54.533598 | orchestrator | TASK [skyline : Creating Skyline database] *************************************
2026-04-06 03:30:54.533608 | orchestrator | Monday 06 April 2026 03:30:24 +0000 (0:00:01.923) 0:00:44.338 **********
2026-04-06 03:30:54.533661 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:30:54.533671 | orchestrator |
2026-04-06 03:30:54.533680 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ********
2026-04-06 03:30:54.533688 | orchestrator | Monday 06 April 2026 03:30:26 +0000 (0:00:02.094) 0:00:46.433 **********
2026-04-06 03:30:54.533697 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:30:54.533705 | orchestrator |
2026-04-06 03:30:54.533714 | orchestrator | TASK [skyline : Running Skyline bootstrap container] ***************************
2026-04-06 03:30:54.533722 | orchestrator | Monday 06 April 2026 03:30:28 +0000 (0:00:02.237) 0:00:48.670 **********
2026-04-06 03:30:54.533731 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:30:54.533739 | orchestrator |
2026-04-06 03:30:54.533747 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-04-06 03:30:54.533756 | orchestrator | Monday 06 April 2026 03:30:36 +0000 (0:00:07.934) 0:00:56.605 **********
2026-04-06 03:30:54.533765 | orchestrator |
2026-04-06 03:30:54.533773 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-04-06 03:30:54.533782 | orchestrator | Monday 06 April 2026 03:30:36 +0000 (0:00:00.072) 0:00:56.677 **********
2026-04-06 03:30:54.533790 | orchestrator |
2026-04-06 03:30:54.533799 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-04-06 03:30:54.533807 | orchestrator | Monday 06 April 2026 03:30:36 +0000 (0:00:00.074) 0:00:56.751 **********
2026-04-06 03:30:54.533816 | orchestrator |
2026-04-06 03:30:54.533824 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] ****************
2026-04-06 03:30:54.533833 | orchestrator | Monday 06 April 2026 03:30:37 +0000 (0:00:00.076) 0:00:56.828 **********
2026-04-06 03:30:54.533841 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:30:54.533850 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:30:54.533858 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:30:54.533866 | orchestrator |
2026-04-06 03:30:54.533875 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ******************
2026-04-06 03:30:54.533883 | orchestrator | Monday 06 April 2026 03:30:43 +0000 (0:00:06.793) 0:01:03.621 **********
2026-04-06 03:30:54.533892 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:30:54.533900 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:30:54.533909 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:30:54.533917 | orchestrator |
2026-04-06 03:30:54.533926 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 03:30:54.533936 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-06 03:30:54.533946 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-06 03:30:54.533954 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-06 03:30:54.533963 | orchestrator |
2026-04-06 03:30:54.533971 | orchestrator |
2026-04-06 03:30:54.533980 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 03:30:54.533989 | orchestrator | Monday 06 April 2026 03:30:54 +0000 (0:00:10.319) 0:01:13.940 **********
2026-04-06 03:30:54.533997 | orchestrator | ===============================================================================
2026-04-06 03:30:54.534006 | orchestrator | skyline : Restart skyline-console container ---------------------------- 10.32s
2026-04-06 03:30:54.534071 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.93s
2026-04-06 03:30:54.534082 | orchestrator | skyline : Restart skyline-apiserver container --------------------------- 6.79s
2026-04-06 03:30:54.534090 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.49s
2026-04-06 03:30:54.534098 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 4.04s
2026-04-06 03:30:54.534121 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.94s
2026-04-06 03:30:54.534130 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.47s
2026-04-06 03:30:54.534138 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.19s
2026-04-06 03:30:54.534160 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.18s
2026-04-06 03:30:54.534166 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.65s
2026-04-06 03:30:54.534171 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.55s
2026-04-06 03:30:54.534176 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.53s
2026-04-06 03:30:54.534182 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 2.27s
2026-04-06 03:30:54.534187 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.24s
2026-04-06 03:30:54.534192 | orchestrator | skyline : Creating Skyline database ------------------------------------- 2.09s
2026-04-06 03:30:54.534197 | orchestrator | skyline : Check skyline container --------------------------------------- 1.92s
2026-04-06 03:30:54.534202 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.57s
2026-04-06 03:30:54.534207 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.42s
2026-04-06 03:30:54.534212 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.41s
2026-04-06 03:30:54.534218 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.88s
2026-04-06 03:30:57.128970 | orchestrator | 2026-04-06 03:30:57 | INFO  | Task 109cf2d2-7990-42f7-9069-c7daf46a845b (glance) was prepared for execution.
2026-04-06 03:30:57.129075 | orchestrator | 2026-04-06 03:30:57 | INFO  | It takes a moment until task 109cf2d2-7990-42f7-9069-c7daf46a845b (glance) has been started and output is visible here.
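For post-processing job logs like the one above, the per-host counters in the PLAY RECAP block can be extracted with a short script. This is an illustrative sketch, not part of the job itself: the `parse_recap` helper and its return shape are assumptions; the field names (`ok`, `changed`, `unreachable`, `failed`, `skipped`, `rescued`, `ignored`) follow the recap lines shown in this log.

```python
import re

# Matches one PLAY RECAP host line, e.g.
#   testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
RECAP_RE = re.compile(
    r"(?P<host>\S+)\s*:\s*"
    r"ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+unreachable=(?P<unreachable>\d+)\s+"
    r"failed=(?P<failed>\d+)\s+skipped=(?P<skipped>\d+)\s+"
    r"rescued=(?P<rescued>\d+)\s+ignored=(?P<ignored>\d+)"
)

def parse_recap(line: str) -> dict:
    """Return the per-host task counters from a PLAY RECAP line, or {} if none match."""
    m = RECAP_RE.search(line)
    if not m:
        return {}
    fields = m.groupdict()
    host = fields.pop("host")
    # Counters come back as strings from the regex; convert them to ints.
    return {"host": host, **{k: int(v) for k, v in fields.items()}}

line = ("testbed-node-0 : ok=22  changed=16  unreachable=0 "
        "failed=0 skipped=3  rescued=0 ignored=0")
print(parse_recap(line))
# → {'host': 'testbed-node-0', 'ok': 22, 'changed': 16, 'unreachable': 0,
#    'failed': 0, 'skipped': 3, 'rescued': 0, 'ignored': 0}
```

A run would typically be flagged as healthy when every parsed line has `failed == 0` and `unreachable == 0`, as is the case for all three nodes in the recap above.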
2026-04-06 03:31:31.835394 | orchestrator |
2026-04-06 03:31:31.835520 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 03:31:31.835533 | orchestrator |
2026-04-06 03:31:31.835541 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 03:31:31.835548 | orchestrator | Monday 06 April 2026 03:31:01 +0000 (0:00:00.294) 0:00:00.294 **********
2026-04-06 03:31:31.835555 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:31:31.835564 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:31:31.835571 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:31:31.835577 | orchestrator |
2026-04-06 03:31:31.835584 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 03:31:31.835591 | orchestrator | Monday 06 April 2026 03:31:01 +0000 (0:00:00.351) 0:00:00.645 **********
2026-04-06 03:31:31.835598 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-04-06 03:31:31.835605 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-04-06 03:31:31.835612 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-04-06 03:31:31.835641 | orchestrator |
2026-04-06 03:31:31.835647 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-04-06 03:31:31.835654 | orchestrator |
2026-04-06 03:31:31.835661 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-06 03:31:31.835667 | orchestrator | Monday 06 April 2026 03:31:02 +0000 (0:00:00.477) 0:00:01.123 **********
2026-04-06 03:31:31.835677 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 03:31:31.835724 | orchestrator |
2026-04-06 03:31:31.835735 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-04-06 03:31:31.835746 | orchestrator | Monday 06 April 2026 03:31:03 +0000 (0:00:00.641) 0:00:01.765 **********
2026-04-06 03:31:31.835757 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-04-06 03:31:31.835763 | orchestrator |
2026-04-06 03:31:31.835770 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-04-06 03:31:31.835776 | orchestrator | Monday 06 April 2026 03:31:06 +0000 (0:00:03.546) 0:00:05.311 **********
2026-04-06 03:31:31.835783 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-04-06 03:31:31.835790 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-04-06 03:31:31.835796 | orchestrator |
2026-04-06 03:31:31.835803 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-04-06 03:31:31.835809 | orchestrator | Monday 06 April 2026 03:31:13 +0000 (0:00:06.498) 0:00:11.810 **********
2026-04-06 03:31:31.835815 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-06 03:31:31.835823 | orchestrator |
2026-04-06 03:31:31.835830 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-04-06 03:31:31.835837 | orchestrator | Monday 06 April 2026 03:31:16 +0000 (0:00:03.136) 0:00:14.946 **********
2026-04-06 03:31:31.835843 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-06 03:31:31.835850 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-04-06 03:31:31.835857 | orchestrator |
2026-04-06 03:31:31.835863 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-04-06 03:31:31.835869 | orchestrator | Monday 06 April 2026 03:31:20 +0000 (0:00:04.088) 0:00:19.035 **********
2026-04-06 03:31:31.835876 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-06
03:31:31.835883 | orchestrator | 2026-04-06 03:31:31.835889 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-04-06 03:31:31.835895 | orchestrator | Monday 06 April 2026 03:31:23 +0000 (0:00:03.142) 0:00:22.177 ********** 2026-04-06 03:31:31.835905 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-04-06 03:31:31.835916 | orchestrator | 2026-04-06 03:31:31.835951 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-04-06 03:31:31.835963 | orchestrator | Monday 06 April 2026 03:31:27 +0000 (0:00:03.796) 0:00:25.974 ********** 2026-04-06 03:31:31.836005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 03:31:31.836030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 03:31:31.836049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 03:31:31.836062 | orchestrator | 2026-04-06 03:31:31.836073 | orchestrator | TASK [glance : include_tasks] 
**************************************************
2026-04-06 03:31:31.836084 | orchestrator | Monday 06 April 2026 03:31:30 +0000 (0:00:03.715) 0:00:29.689 **********
2026-04-06 03:31:31.836096 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 03:31:31.836105 | orchestrator |
2026-04-06 03:31:31.836123 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-04-06 03:31:47.965437 | orchestrator | Monday 06 April 2026 03:31:31 +0000 (0:00:00.829) 0:00:30.519 **********
2026-04-06 03:31:47.965540 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:31:47.965553 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:31:47.965561 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:31:47.965570 | orchestrator |
2026-04-06 03:31:47.965579 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-04-06 03:31:47.965587 | orchestrator | Monday 06 April 2026 03:31:35 +0000 (0:00:03.857) 0:00:34.376 **********
2026-04-06 03:31:47.965596 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-06 03:31:47.965606 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-06 03:31:47.965614 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-06 03:31:47.965667 | orchestrator |
2026-04-06 03:31:47.965676 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-04-06 03:31:47.965684 | orchestrator | Monday 06 April 2026 03:31:37 +0000 (0:00:01.569) 0:00:35.946 **********
2026-04-06 03:31:47.965693 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-06 03:31:47.965701 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-06 03:31:47.965710 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-04-06 03:31:47.965718 | orchestrator |
2026-04-06 03:31:47.965726 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-04-06 03:31:47.965734 | orchestrator | Monday 06 April 2026 03:31:38 +0000 (0:00:01.426) 0:00:37.372 **********
2026-04-06 03:31:47.965742 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:31:47.965751 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:31:47.965759 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:31:47.965768 | orchestrator |
2026-04-06 03:31:47.965776 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-04-06 03:31:47.965784 | orchestrator | Monday 06 April 2026 03:31:39 +0000 (0:00:00.684) 0:00:38.057 **********
2026-04-06 03:31:47.965792 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:31:47.965800 | orchestrator |
2026-04-06 03:31:47.965808 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-04-06 03:31:47.965817 | orchestrator | Monday 06 April 2026 03:31:39 +0000 (0:00:00.136) 0:00:38.193 **********
2026-04-06 03:31:47.965825 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:31:47.965833 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:31:47.965842 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:31:47.965850 | orchestrator |
2026-04-06 03:31:47.965858 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-06 03:31:47.965866 | orchestrator | Monday 06 April 2026 03:31:39 +0000 (0:00:00.297) 0:00:38.490 **********
2026-04-06 03:31:47.965874 | orchestrator | included:
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 03:31:47.965883 | orchestrator | 2026-04-06 03:31:47.965891 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-04-06 03:31:47.965899 | orchestrator | Monday 06 April 2026 03:31:40 +0000 (0:00:00.793) 0:00:39.284 ********** 2026-04-06 03:31:47.965930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 03:31:47.965982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 03:31:47.966000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 03:31:47.966068 | orchestrator | 2026-04-06 03:31:47.966079 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-06 03:31:47.966089 | orchestrator | Monday 06 April 2026 03:31:44 +0000 (0:00:04.120) 0:00:43.405 ********** 2026-04-06 03:31:47.966106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-06 03:31:51.887293 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:31:51.887459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-06 03:31:51.887518 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:31:51.887533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-06 03:31:51.887545 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:31:51.887557 | orchestrator | 2026-04-06 03:31:51.887569 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-06 03:31:51.887582 | orchestrator | Monday 06 April 2026 03:31:47 +0000 (0:00:03.244) 0:00:46.649 ********** 2026-04-06 03:31:51.887663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-06 03:31:51.887691 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:31:51.887710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-06 03:31:51.887722 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:31:51.887744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-06 03:32:30.598352 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:32:30.598464 | orchestrator | 2026-04-06 03:32:30.598476 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-06 03:32:30.598487 | orchestrator | Monday 06 April 2026 03:31:51 +0000 (0:00:03.923) 0:00:50.573 ********** 2026-04-06 03:32:30.598494 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:32:30.598502 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:32:30.598510 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:32:30.598541 | orchestrator | 2026-04-06 03:32:30.598549 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-04-06 03:32:30.598557 | orchestrator | Monday 06 April 2026 03:31:55 +0000 (0:00:03.720) 0:00:54.294 ********** 2026-04-06 03:32:30.598583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 03:32:30.598594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 03:32:30.598683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-06 03:32:30.598703 | orchestrator |
2026-04-06 03:32:30.598712 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-04-06 03:32:30.598719 | orchestrator | Monday 06 April 2026 03:32:00 +0000 (0:00:04.453) 0:00:58.747 **********
2026-04-06 03:32:30.598727 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:32:30.598735 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:32:30.598743 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:32:30.598752 | orchestrator |
2026-04-06 03:32:30.598761 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-04-06 03:32:30.598769 | orchestrator | Monday 06 April 2026 03:32:06 +0000 (0:00:06.017) 0:01:04.765 **********
2026-04-06 03:32:30.598777 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:32:30.598786 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:32:30.598795 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:32:30.598804 | orchestrator |
2026-04-06 03:32:30.598813 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2026-04-06 03:32:30.598820 | orchestrator | Monday 06 April 2026 03:32:10 +0000 (0:00:04.405) 0:01:09.171 **********
2026-04-06 03:32:30.598828 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:32:30.598836 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:32:30.598844 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:32:30.598852 | orchestrator |
2026-04-06 03:32:30.598860 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-04-06 03:32:30.598869 | orchestrator | Monday 06 April 2026 03:32:13 +0000 (0:00:03.485) 0:01:12.657 **********
2026-04-06 03:32:30.598877 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:32:30.598885 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:32:30.598893 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:32:30.598902 | orchestrator |
2026-04-06 03:32:30.598911 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-04-06 03:32:30.598920 | orchestrator | Monday 06 April 2026 03:32:17 +0000 (0:00:03.946) 0:01:16.603 **********
2026-04-06 03:32:30.598929 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:32:30.598937 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:32:30.598946 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:32:30.598954 | orchestrator |
2026-04-06 03:32:30.598960 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-04-06 03:32:30.598966 | orchestrator | Monday 06 April 2026 03:32:21 +0000 (0:00:03.750) 0:01:20.354 **********
2026-04-06 03:32:30.598971 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:32:30.598977 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:32:30.598982 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:32:30.598987 | orchestrator |
2026-04-06 03:32:30.598998 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-04-06 03:32:30.599003 | orchestrator | Monday 06 April 2026 03:32:22 +0000 (0:00:00.612) 0:01:20.966 **********
2026-04-06 03:32:30.599009 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-06 03:32:30.599016 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:32:30.599021 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-06 03:32:30.599027 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:32:30.599032 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-06 03:32:30.599038 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:32:30.599042 | orchestrator |
2026-04-06 03:32:30.599047 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-04-06 03:32:30.599052 | orchestrator | Monday 06 April 2026 03:32:25 +0000 (0:00:03.724) 0:01:24.691 **********
2026-04-06 03:32:30.599056 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:32:30.599061 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:32:30.599066 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:32:30.599070 | orchestrator |
2026-04-06 03:32:30.599075 | orchestrator | TASK [glance : Check glance containers] ****************************************
2026-04-06 03:32:30.599085 | orchestrator | Monday 06 April 2026 03:32:30 +0000 (0:00:04.594) 0:01:29.285 **********
2026-04-06 03:33:52.371017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image':
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 03:33:52.371166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 03:33:52.371251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-06 03:33:52.371269 | orchestrator |
2026-04-06 03:33:52.371284 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-06 03:33:52.371299 | orchestrator | Monday 06 April 2026 03:32:34 +0000 (0:00:04.122) 0:01:33.408 **********
2026-04-06 03:33:52.371311 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:33:52.371324 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:33:52.371335 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:33:52.371348 | orchestrator |
2026-04-06 03:33:52.371361 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2026-04-06 03:33:52.371374 | orchestrator | Monday 06 April 2026 03:32:35 +0000 (0:00:00.562) 0:01:33.971 **********
2026-04-06 03:33:52.371387 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:33:52.371400 | orchestrator |
2026-04-06 03:33:52.371413 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2026-04-06 03:33:52.371425 | orchestrator | Monday 06 April 2026 03:32:37 +0000 (0:00:02.094) 0:01:36.065 **********
2026-04-06 03:33:52.371438 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:33:52.371452 | orchestrator |
2026-04-06 03:33:52.371464 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2026-04-06 03:33:52.371477 | orchestrator | Monday 06 April 2026 03:32:39 +0000 (0:00:02.303) 0:01:38.369 **********
2026-04-06 03:33:52.371489 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:33:52.371501 | orchestrator |
2026-04-06 03:33:52.371514 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2026-04-06 03:33:52.371542 | orchestrator | Monday 06 April 2026 03:32:41 +0000 (0:00:02.058) 0:01:40.427 **********
2026-04-06 03:33:52.371557 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:33:52.371571 | orchestrator |
2026-04-06 03:33:52.371586 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2026-04-06 03:33:52.371601 | orchestrator | Monday 06 April 2026 03:33:10 +0000 (0:00:28.712) 0:02:09.140 **********
2026-04-06 03:33:52.371614 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:33:52.371628 | orchestrator |
2026-04-06 03:33:52.371669 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-04-06 03:33:52.371683 | orchestrator | Monday 06 April 2026 03:33:12 +0000 (0:00:02.059) 0:02:11.200 **********
2026-04-06 03:33:52.371696 | orchestrator |
2026-04-06 03:33:52.371709 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-04-06 03:33:52.371721 | orchestrator | Monday 06 April 2026 03:33:12 +0000 (0:00:00.075) 0:02:11.275 **********
2026-04-06 03:33:52.371734 | orchestrator |
2026-04-06 03:33:52.371746 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-04-06 03:33:52.371759 | orchestrator | Monday 06 April 2026 03:33:12 +0000 (0:00:00.074) 0:02:11.349 **********
2026-04-06 03:33:52.371771 | orchestrator |
2026-04-06 03:33:52.371784 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2026-04-06 03:33:52.371795 | orchestrator | Monday 06 April 2026 03:33:12 +0000 (0:00:00.072) 0:02:11.422 **********
2026-04-06 03:33:52.371809 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:33:52.371821 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:33:52.371833 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:33:52.371845 | orchestrator |
2026-04-06 03:33:52.371857 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 03:33:52.371871 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-06 03:33:52.371886 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-06 03:33:52.371898 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-06 03:33:52.371910 | orchestrator |
2026-04-06 03:33:52.371922 | orchestrator |
2026-04-06 03:33:52.371934 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 03:33:52.371946 | orchestrator | Monday 06 April 2026 03:33:52 +0000 (0:00:39.617) 0:02:51.039 **********
2026-04-06 03:33:52.371959 | orchestrator | ===============================================================================
2026-04-06 03:33:52.371971 | orchestrator | glance : Restart glance-api container ---------------------------------- 39.62s
2026-04-06 03:33:52.371984 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.71s
2026-04-06 03:33:52.371996 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.50s
2026-04-06 03:33:52.372024 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.02s
2026-04-06 03:33:52.783104 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.59s
2026-04-06 03:33:52.783214 | orchestrator | glance : Copying over config.json files for services -------------------- 4.45s
2026-04-06 03:33:52.783231 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.41s
2026-04-06 03:33:52.783245 | orchestrator | glance : Check glance containers ---------------------------------------- 4.12s
2026-04-06 03:33:52.783258 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.12s
2026-04-06 03:33:52.783271 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.09s
2026-04-06 03:33:52.783285 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.95s
2026-04-06 03:33:52.783298 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.92s
2026-04-06 03:33:52.783363 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.86s
2026-04-06 03:33:52.783377 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.80s
2026-04-06 03:33:52.783391 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.75s
2026-04-06 03:33:52.783404 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.72s
2026-04-06 03:33:52.783419 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.72s
2026-04-06 03:33:52.783432 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.72s
2026-04-06 03:33:52.783446 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.55s
2026-04-06 03:33:52.783459 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.49s
2026-04-06 03:33:55.435472 | orchestrator | 2026-04-06 03:33:55 | INFO  | Task 9d8a6865-f775-4605-a193-c22973d4fbea (cinder) was prepared for execution.
2026-04-06 03:33:55.435603 | orchestrator | 2026-04-06 03:33:55 | INFO  | It takes a moment until task 9d8a6865-f775-4605-a193-c22973d4fbea (cinder) has been started and output is visible here.
2026-04-06 03:34:32.024397 | orchestrator |
2026-04-06 03:34:32.024515 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 03:34:32.024560 | orchestrator |
2026-04-06 03:34:32.024585 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 03:34:32.024596 | orchestrator | Monday 06 April 2026 03:34:00 +0000 (0:00:00.286) 0:00:00.286 **********
2026-04-06 03:34:32.024606 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:34:32.024616 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:34:32.024626 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:34:32.024667 | orchestrator |
2026-04-06 03:34:32.024679 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 03:34:32.024688 | orchestrator | Monday 06 April 2026 03:34:00 +0000 (0:00:00.346) 0:00:00.633 **********
2026-04-06 03:34:32.024699 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-04-06 03:34:32.024710 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-04-06 03:34:32.024721 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-04-06 03:34:32.024732 | orchestrator |
2026-04-06 03:34:32.024743 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-04-06 03:34:32.024754 | orchestrator |
2026-04-06 03:34:32.024765 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-06 03:34:32.024773 | orchestrator | Monday 06 April 2026 03:34:00 +0000 (0:00:00.581) 0:00:01.109 **********
2026-04-06 03:34:32.024780 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 03:34:32.024787 | orchestrator |
2026-04-06 03:34:32.024794 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2026-04-06 03:34:32.024800 | orchestrator | Monday 06 April 2026 03:34:01 +0000 (0:00:00.581) 0:00:01.690 **********
2026-04-06 03:34:32.024807 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-04-06 03:34:32.024813 | orchestrator |
2026-04-06 03:34:32.024820 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2026-04-06 03:34:32.024827 | orchestrator | Monday 06 April 2026 03:34:05 +0000 (0:00:03.588) 0:00:05.279 **********
2026-04-06 03:34:32.024834 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-04-06 03:34:32.024841 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-04-06 03:34:32.024847 | orchestrator |
2026-04-06 03:34:32.024854 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-04-06 03:34:32.024860 | orchestrator | Monday 06 April 2026 03:34:11 +0000 (0:00:06.828) 0:00:12.108 **********
2026-04-06 03:34:32.024889 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-06 03:34:32.024896 | orchestrator |
2026-04-06 03:34:32.024903 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-04-06 03:34:32.024909 | orchestrator | Monday 06 April 2026 03:34:15 +0000 (0:00:03.178) 0:00:15.286 **********
2026-04-06 03:34:32.024915 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-06 03:34:32.024922 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-04-06 03:34:32.024928 | orchestrator |
2026-04-06 03:34:32.024935 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-04-06 03:34:32.024941 | orchestrator | Monday 06 April 2026 03:34:19 +0000 (0:00:04.214) 0:00:19.500 **********
2026-04-06 03:34:32.024949 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-06 03:34:32.024956 | orchestrator |
2026-04-06 03:34:32.024963 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2026-04-06 03:34:32.024971 | orchestrator | Monday 06 April 2026 03:34:22 +0000 (0:00:03.177) 0:00:22.678 **********
2026-04-06 03:34:32.024978 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2026-04-06 03:34:32.024986 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2026-04-06 03:34:32.024993 | orchestrator |
2026-04-06 03:34:32.025000 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2026-04-06 03:34:32.025008 | orchestrator | Monday 06 April 2026 03:34:29 +0000 (0:00:07.467) 0:00:30.146 **********
2026-04-06 03:34:32.025030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy':
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-06 03:34:32.025058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-06 03:34:32.025066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-06 03:34:32.025079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:34:32.025086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:34:32.025096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:34:32.025103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 03:34:32.025117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 03:34:38.178933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 03:34:38.179054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 03:34:38.179068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 03:34:38.179089 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-06 03:34:38.179098 | orchestrator |
2026-04-06 03:34:38.179107 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-06 03:34:38.179117 | orchestrator | Monday 06 April 2026 03:34:32 +0000 (0:00:02.138) 0:00:32.284 **********
2026-04-06 03:34:38.179124 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:34:38.179133 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:34:38.179140 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:34:38.179147 | orchestrator |
2026-04-06 03:34:38.179155 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-06 03:34:38.179162 | orchestrator | Monday 06 April 2026 03:34:32 +0000 (0:00:00.585) 0:00:32.869 **********
2026-04-06 03:34:38.179170 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 03:34:38.179178 | orchestrator |
2026-04-06 03:34:38.179185 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-04-06 03:34:38.179193 | orchestrator | Monday 06 April 2026 03:34:33 +0000 (0:00:00.611) 0:00:33.481 **********
2026-04-06 03:34:38.179200 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-04-06 03:34:38.179208 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume)
2026-04-06 03:34:38.179215 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume)
2026-04-06 03:34:38.179223 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup)
2026-04-06 03:34:38.179230 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup)
2026-04-06 03:34:38.179245 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup)
2026-04-06 03:34:38.179252 | orchestrator |
2026-04-06 03:34:38.179260 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2026-04-06 03:34:38.179267 | orchestrator | Monday 06 April 2026 03:34:35 +0000 (0:00:01.696) 0:00:35.178 **********
2026-04-06 03:34:38.179291 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-04-06 03:34:38.179302 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-06 03:34:38.179315 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-06 03:34:38.179323 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-06 03:34:38.179337 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-06 03:34:49.334494 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-06 03:34:49.334628 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-06 03:34:49.334709 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-06 03:34:49.334748 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-06 03:34:49.334760 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-06 03:34:49.334815 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-06 
03:34:49.334827 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-06 03:34:49.334844 | orchestrator | 2026-04-06 03:34:49.334859 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-06 03:34:49.334886 | orchestrator | Monday 06 April 2026 03:34:38 +0000 (0:00:03.564) 0:00:38.742 ********** 2026-04-06 03:34:49.334902 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-06 03:34:49.334918 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-06 03:34:49.334932 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-06 03:34:49.334947 | orchestrator | 2026-04-06 03:34:49.334960 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-06 03:34:49.334974 | orchestrator | Monday 06 April 2026 03:34:40 +0000 (0:00:01.562) 0:00:40.305 ********** 2026-04-06 03:34:49.334990 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-04-06 03:34:49.335004 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-04-06 03:34:49.335018 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-04-06 03:34:49.335033 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-04-06 03:34:49.335047 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-04-06 03:34:49.335064 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-04-06 03:34:49.335079 | orchestrator | 2026-04-06 03:34:49.335103 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-06 03:34:49.335118 | orchestrator | Monday 06 April 2026 03:34:42 +0000 (0:00:02.781) 0:00:43.087 ********** 2026-04-06 03:34:49.335133 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-06 03:34:49.335150 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-06 03:34:49.335165 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-06 03:34:49.335180 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-06 03:34:49.335204 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-04-06 03:34:49.335215 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-04-06 03:34:49.335226 | orchestrator | 2026-04-06 03:34:49.335236 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-06 03:34:49.335246 | orchestrator | Monday 06 April 2026 03:34:43 +0000 (0:00:01.068) 0:00:44.155 ********** 2026-04-06 03:34:49.335256 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:34:49.335265 | orchestrator | 2026-04-06 03:34:49.335274 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-06 03:34:49.335282 | orchestrator | Monday 06 April 2026 03:34:44 +0000 (0:00:00.146) 0:00:44.302 ********** 2026-04-06 03:34:49.335291 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:34:49.335300 | orchestrator | 
skipping: [testbed-node-1] 2026-04-06 03:34:49.335308 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:34:49.335317 | orchestrator | 2026-04-06 03:34:49.335325 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-06 03:34:49.335334 | orchestrator | Monday 06 April 2026 03:34:44 +0000 (0:00:00.546) 0:00:44.849 ********** 2026-04-06 03:34:49.335344 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 03:34:49.335353 | orchestrator | 2026-04-06 03:34:49.335361 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-06 03:34:49.335370 | orchestrator | Monday 06 April 2026 03:34:45 +0000 (0:00:00.623) 0:00:45.473 ********** 2026-04-06 03:34:49.335391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-06 03:34:50.360375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-06 03:34:50.360543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-06 03:34:50.360597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:34:50.360621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:34:50.360701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:34:50.360751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 03:34:50.360773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 03:34:50.360793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 
03:34:50.360840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 03:34:50.360862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 03:34:50.360883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 03:34:50.360952 | orchestrator | 2026-04-06 03:34:50.360968 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-06 03:34:50.360983 | orchestrator | Monday 06 April 2026 03:34:49 +0000 (0:00:04.105) 0:00:49.578 ********** 2026-04-06 03:34:50.361009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-06 03:34:50.485024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 03:34:50.485150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 03:34:50.485162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 03:34:50.485171 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:34:50.485180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-06 03:34:50.485189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 03:34:50.485212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-04-06 03:34:50.485226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 03:34:50.485237 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:34:50.485245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-06 03:34:50.485252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 03:34:50.485259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 03:34:50.485267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 03:34:50.485274 | orchestrator | skipping: 
[testbed-node-2] 2026-04-06 03:34:50.485286 | orchestrator | 2026-04-06 03:34:50.485294 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-06 03:34:50.485306 | orchestrator | Monday 06 April 2026 03:34:50 +0000 (0:00:01.055) 0:00:50.634 ********** 2026-04-06 03:34:51.155749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-06 03:34:51.155892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 03:34:51.155911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 03:34:51.155924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 03:34:51.155935 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:34:51.155948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-06 03:34:51.156000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 03:34:51.156017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 03:34:51.156028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 03:34:51.156038 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:34:51.156048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-06 03:34:51.156059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 03:34:51.156077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 03:34:55.791126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 03:34:55.791255 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:34:55.791273 | orchestrator | 2026-04-06 03:34:55.791286 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 
2026-04-06 03:34:55.791314 | orchestrator | Monday 06 April 2026 03:34:51 +0000 (0:00:01.009) 0:00:51.643 ********** 2026-04-06 03:34:55.791328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-06 03:34:55.791342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-06 
03:34:55.791354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-06 03:34:55.791409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:34:55.791424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:34:55.791441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:34:55.791454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 03:34:55.791468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 03:34:55.791479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 03:34:55.791506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 03:35:09.227292 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 03:35:09.227393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 03:35:09.227400 | orchestrator | 2026-04-06 03:35:09.227406 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-06 03:35:09.227411 | orchestrator | Monday 06 April 2026 03:34:55 +0000 (0:00:04.392) 0:00:56.035 ********** 2026-04-06 03:35:09.227416 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-06 03:35:09.227421 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-06 03:35:09.227425 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-06 03:35:09.227429 | orchestrator | 2026-04-06 03:35:09.227433 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-06 03:35:09.227438 | orchestrator | Monday 06 April 2026 03:34:57 +0000 (0:00:01.978) 0:00:58.014 ********** 2026-04-06 03:35:09.227443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-06 03:35:09.227465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-06 03:35:09.227480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-06 03:35:09.227489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:35:09.227493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:35:09.227497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:35:09.227505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 03:35:09.227511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 03:35:09.227520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 03:35:11.854322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 03:35:11.854405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 03:35:11.854413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 03:35:11.854437 | orchestrator | 2026-04-06 03:35:11.854453 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-06 03:35:11.854471 | orchestrator | Monday 06 April 2026 03:35:09 +0000 (0:00:11.454) 0:01:09.469 ********** 2026-04-06 03:35:11.854479 | orchestrator | changed: [testbed-node-0] 
2026-04-06 03:35:11.854488 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:35:11.854495 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:35:11.854503 | orchestrator |
2026-04-06 03:35:11.854510 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-04-06 03:35:11.854517 | orchestrator | Monday 06 April 2026 03:35:10 +0000 (0:00:01.548) 0:01:11.017 **********
2026-04-06 03:35:11.854527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-06 03:35:11.854537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 03:35:11.854569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-06 03:35:11.854579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-06 03:35:11.854596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-06 03:35:11.854604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 03:35:11.854612 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:35:11.854621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-06 03:35:11.854641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-06 03:35:15.678628 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:35:15.678829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-06 03:35:15.678876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 03:35:15.678891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-06 03:35:15.678905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-06 03:35:15.678916 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:35:15.678928 | orchestrator |
2026-04-06 03:35:15.678941 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2026-04-06 03:35:15.678953 | orchestrator | Monday 06 April 2026 03:35:11 +0000 (0:00:01.085) 0:01:12.103 **********
2026-04-06 03:35:15.678980 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:35:15.679002 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:35:15.679013 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:35:15.679024 | orchestrator |
2026-04-06 03:35:15.679035 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2026-04-06 03:35:15.679046 | orchestrator | Monday 06 April 2026 03:35:12 +0000 (0:00:00.670) 0:01:12.774 **********
2026-04-06 03:35:15.679093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-06 03:35:15.679118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-06 03:35:15.679130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-06 03:35:15.679142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 03:35:15.679156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 03:35:15.679170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 03:35:15.679198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-06 03:36:56.249506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-06 03:36:56.249732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-06 03:36:56.249759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-06 03:36:56.249772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-06 03:36:56.249804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-06 03:36:56.249846 | orchestrator |
2026-04-06 03:36:56.249862 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-06 03:36:56.249889 | orchestrator | Monday 06 April 2026 03:35:15 +0000 (0:00:03.144) 0:01:15.918 **********
2026-04-06 03:36:56.249917 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:36:56.249935 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:36:56.249952 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:36:56.249969 | orchestrator |
2026-04-06 03:36:56.249986 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2026-04-06 03:36:56.250004 | orchestrator | Monday 06 April 2026 03:35:16 +0000 (0:00:00.320) 0:01:16.238 **********
2026-04-06 03:36:56.250096 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:36:56.250117 | orchestrator |
2026-04-06 03:36:56.250155 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2026-04-06 03:36:56.250179 | orchestrator | Monday 06 April 2026 03:35:18 +0000 (0:00:02.114) 0:01:18.353 **********
2026-04-06 03:36:56.250204 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:36:56.250223 | orchestrator |
2026-04-06 03:36:56.250240 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2026-04-06 03:36:56.250258 | orchestrator | Monday 06 April 2026 03:35:20 +0000 (0:00:02.394) 0:01:20.747 **********
2026-04-06 03:36:56.250274 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:36:56.250291 | orchestrator |
2026-04-06 03:36:56.250309 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-04-06 03:36:56.250328 | orchestrator | Monday 06 April 2026 03:35:40 +0000 (0:00:19.870) 0:01:40.617 **********
2026-04-06 03:36:56.250346 | orchestrator |
2026-04-06 03:36:56.250365 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-04-06 03:36:56.250383 | orchestrator | Monday 06 April 2026 03:35:40 +0000 (0:00:00.081) 0:01:40.698 **********
2026-04-06 03:36:56.250397 | orchestrator |
2026-04-06 03:36:56.250408 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-04-06 03:36:56.250419 | orchestrator | Monday 06 April 2026 03:35:40 +0000 (0:00:00.085) 0:01:40.784 **********
2026-04-06 03:36:56.250430 | orchestrator |
2026-04-06 03:36:56.250442 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2026-04-06 03:36:56.250458 | orchestrator | Monday 06 April 2026 03:35:40 +0000 (0:00:00.095) 0:01:40.879 **********
2026-04-06 03:36:56.250475 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:36:56.250493 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:36:56.250510 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:36:56.250528 | orchestrator |
2026-04-06 03:36:56.250545 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2026-04-06 03:36:56.250568 | orchestrator | Monday 06 April 2026 03:36:13 +0000 (0:00:33.101) 0:02:13.981 **********
2026-04-06 03:36:56.250594 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:36:56.250611 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:36:56.250630 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:36:56.250649 | orchestrator |
2026-04-06 03:36:56.250740 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2026-04-06 03:36:56.250754 | orchestrator | Monday 06 April 2026 03:36:24 +0000 (0:00:10.244) 0:02:24.225 **********
2026-04-06 03:36:56.250765 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:36:56.250777 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:36:56.250787 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:36:56.250798 | orchestrator |
2026-04-06 03:36:56.250809 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2026-04-06 03:36:56.250823 | orchestrator | Monday 06 April 2026 03:36:47 +0000 (0:00:23.267) 0:02:47.493 **********
2026-04-06 03:36:56.250842 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:36:56.250873 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:36:56.250890 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:36:56.250908 | orchestrator |
2026-04-06 03:36:56.250925 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2026-04-06 03:36:56.250961 | orchestrator | Monday 06 April 2026 03:36:55 +0000 (0:00:08.599) 0:02:56.092 **********
2026-04-06 03:36:56.250977 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:36:56.250993 | orchestrator |
2026-04-06 03:36:56.251010 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 03:36:56.251031 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-06 03:36:56.251050 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-06 03:36:56.251069 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-06 03:36:56.251087 | orchestrator |
2026-04-06 03:36:56.251107 | orchestrator |
2026-04-06 03:36:56.251119 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 03:36:56.251136 | orchestrator | Monday 06 April 2026 03:36:56 +0000 (0:00:00.294) 0:02:56.387 **********
2026-04-06 03:36:56.251163 | orchestrator | ===============================================================================
2026-04-06 03:36:56.251183 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 33.10s
2026-04-06 03:36:56.251199 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 23.27s
2026-04-06 03:36:56.251214 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.87s
2026-04-06 03:36:56.251229 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.45s
2026-04-06 03:36:56.251246 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.24s
2026-04-06 03:36:56.251276 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 8.60s
2026-04-06 03:36:56.251292 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.47s
2026-04-06 03:36:56.251309 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.83s
2026-04-06 03:36:56.251324 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.39s
2026-04-06 03:36:56.251342 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.21s
2026-04-06 03:36:56.251359 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.11s
2026-04-06 03:36:56.251375 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.59s
2026-04-06 03:36:56.251391 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.56s
2026-04-06 03:36:56.251408 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.18s
2026-04-06 03:36:56.251443 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.18s
2026-04-06 03:36:56.674519 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.14s
2026-04-06 03:36:56.674615 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.78s
2026-04-06 03:36:56.674627 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.39s
2026-04-06 03:36:56.674635 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.14s
2026-04-06 03:36:56.674642 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.11s
2026-04-06 03:36:59.386407 | orchestrator | 2026-04-06 03:36:59 | INFO  | Task 3d9d6672-9f02-47ba-a731-1c4692071504 (barbican) was prepared for execution.
2026-04-06 03:36:59.386517 | orchestrator | 2026-04-06 03:36:59 | INFO  | It takes a moment until task 3d9d6672-9f02-47ba-a731-1c4692071504 (barbican) has been started and output is visible here.
2026-04-06 03:37:44.090965 | orchestrator |
2026-04-06 03:37:44.091061 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 03:37:44.091072 | orchestrator |
2026-04-06 03:37:44.091080 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 03:37:44.091129 | orchestrator | Monday 06 April 2026 03:37:04 +0000 (0:00:00.288) 0:00:00.288 **********
2026-04-06 03:37:44.091136 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:37:44.091144 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:37:44.091150 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:37:44.091156 | orchestrator |
2026-04-06 03:37:44.091163 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 03:37:44.091170 | orchestrator | Monday 06 April 2026 03:37:04 +0000 (0:00:00.368) 0:00:00.656 **********
2026-04-06 03:37:44.091177 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-04-06 03:37:44.091184 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-04-06 03:37:44.091190 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-04-06 03:37:44.091196 | orchestrator |
2026-04-06 03:37:44.091203 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-04-06 03:37:44.091209 | orchestrator |
2026-04-06 03:37:44.091215 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-04-06 03:37:44.091221 | orchestrator | Monday 06 April 2026 03:37:04 +0000 (0:00:00.481) 0:00:01.138 **********
2026-04-06 03:37:44.091228 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 03:37:44.091235 | orchestrator |
2026-04-06 03:37:44.091241 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-04-06 03:37:44.091247 | orchestrator | Monday 06 April 2026 03:37:05 +0000 (0:00:00.639) 0:00:01.778 **********
2026-04-06 03:37:44.091254 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-04-06 03:37:44.091261 | orchestrator |
2026-04-06 03:37:44.091267 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-04-06 03:37:44.091273 | orchestrator | Monday 06 April 2026 03:37:09 +0000 (0:00:03.557) 0:00:05.335 **********
2026-04-06 03:37:44.091279 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-04-06 03:37:44.091286 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-04-06 03:37:44.091292 | orchestrator |
2026-04-06 03:37:44.091298 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-04-06 03:37:44.091304 | orchestrator | Monday 06 April 2026 03:37:15 +0000 (0:00:06.549) 0:00:11.885 **********
2026-04-06 03:37:44.091311 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-06 03:37:44.091317 | orchestrator |
2026-04-06 03:37:44.091323 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-04-06 03:37:44.091329 | orchestrator | Monday 06 April 2026 03:37:18 +0000 (0:00:03.163) 0:00:15.049 **********
2026-04-06 03:37:44.091336 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-06 03:37:44.091343 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-04-06 03:37:44.091349 | orchestrator |
2026-04-06 03:37:44.091355 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-04-06 03:37:44.091361 | orchestrator | Monday 06 April 2026 03:37:22 +0000 (0:00:04.076) 0:00:19.126 **********
2026-04-06 03:37:44.091368 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-06 03:37:44.091374 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-04-06 03:37:44.091381 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-04-06 03:37:44.091387 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-04-06 03:37:44.091393 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-04-06 03:37:44.091400 | orchestrator |
2026-04-06 03:37:44.091419 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-04-06 03:37:44.091426 | orchestrator | Monday 06 April 2026 03:37:38 +0000 (0:00:15.581) 0:00:34.707 **********
2026-04-06 03:37:44.091432 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-04-06 03:37:44.091445 | orchestrator |
2026-04-06 03:37:44.091451 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-04-06 03:37:44.091457 | orchestrator | Monday 06 April 2026 03:37:42 +0000 (0:00:03.848) 0:00:38.555 **********
2026-04-06 03:37:44.091466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-06 03:37:44.091490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-06 03:37:44.091498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-06 03:37:44.091505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-06 03:37:44.091518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-06 03:37:44.091530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-06 03:37:44.091542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:37:50.242981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:37:50.243100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:37:50.243117 | orchestrator |
2026-04-06 03:37:50.243127 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-04-06 03:37:50.243136 | orchestrator | Monday 06 April 2026 03:37:44 +0000 (0:00:01.803) 0:00:40.359 **********
2026-04-06 03:37:50.243143 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-04-06 03:37:50.243151 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-04-06 03:37:50.243157 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-04-06 03:37:50.243164 | orchestrator |
2026-04-06 03:37:50.243172 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-04-06 03:37:50.243178 | orchestrator | Monday 06 April 2026 03:37:45 +0000 (0:00:01.221) 0:00:41.580 **********
2026-04-06 03:37:50.243185 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:37:50.243193 | orchestrator |
2026-04-06 03:37:50.243216 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-04-06 03:37:50.243237 | orchestrator | Monday 06 April 2026 03:37:45 +0000 (0:00:00.366) 0:00:41.947 **********
2026-04-06 03:37:50.243279 | orchestrator |
skipping: [testbed-node-0] 2026-04-06 03:37:50.243295 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:37:50.243306 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:37:50.243316 | orchestrator | 2026-04-06 03:37:50.243326 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-06 03:37:50.243339 | orchestrator | Monday 06 April 2026 03:37:46 +0000 (0:00:00.344) 0:00:42.291 ********** 2026-04-06 03:37:50.243351 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 03:37:50.243364 | orchestrator | 2026-04-06 03:37:50.243394 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-04-06 03:37:50.243429 | orchestrator | Monday 06 April 2026 03:37:46 +0000 (0:00:00.629) 0:00:42.920 ********** 2026-04-06 03:37:50.243444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-06 03:37:50.243480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-06 03:37:50.243493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-06 03:37:50.243504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:37:50.243534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:37:50.243552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:37:50.243559 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:37:50.243573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:37:51.807277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:37:51.807374 | orchestrator | 2026-04-06 03:37:51.807387 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-04-06 03:37:51.807398 | orchestrator | Monday 06 April 2026 03:37:50 +0000 (0:00:03.588) 0:00:46.509 ********** 2026-04-06 03:37:51.807411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-06 03:37:51.807461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 03:37:51.807472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 03:37:51.807481 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:37:51.807582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-06 03:37:51.807611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 03:37:51.807621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 03:37:51.807639 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:37:51.807653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-06 03:37:51.807662 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 03:37:51.807735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 03:37:51.807745 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:37:51.807754 | orchestrator | 2026-04-06 03:37:51.807763 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-06 03:37:51.807772 | orchestrator | Monday 06 April 2026 03:37:50 +0000 (0:00:00.669) 0:00:47.179 ********** 2026-04-06 03:37:51.807787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-06 03:37:55.436153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 03:37:55.436230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 
03:37:55.436239 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:37:55.436259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-06 03:37:55.436266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 03:37:55.436271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 03:37:55.436276 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:37:55.436293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-06 03:37:55.436320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 03:37:55.436329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 03:37:55.436334 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:37:55.436339 | orchestrator | 2026-04-06 03:37:55.436345 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-04-06 03:37:55.436351 | orchestrator | Monday 06 April 2026 03:37:51 +0000 (0:00:00.900) 0:00:48.079 ********** 2026-04-06 03:37:55.436356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-06 03:37:55.436361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-06 03:37:55.436375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-06 03:38:05.564461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:38:05.564577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:38:05.564591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:38:05.564601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:38:05.564612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:38:05.564645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:38:05.564655 | orchestrator | 2026-04-06 03:38:05.564666 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-04-06 03:38:05.564729 | orchestrator | Monday 06 April 2026 03:37:55 +0000 (0:00:03.620) 0:00:51.700 ********** 2026-04-06 03:38:05.564750 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:38:05.564761 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:38:05.564777 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:38:05.564786 | orchestrator | 2026-04-06 03:38:05.564810 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-04-06 03:38:05.564820 | orchestrator | Monday 06 April 2026 03:37:56 +0000 (0:00:01.573) 0:00:53.273 ********** 2026-04-06 03:38:05.564830 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-06 03:38:05.564839 | orchestrator | 2026-04-06 03:38:05.564847 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-04-06 03:38:05.564856 | orchestrator | Monday 06 April 2026 03:37:58 +0000 (0:00:01.024) 0:00:54.298 ********** 2026-04-06 03:38:05.564865 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:38:05.564873 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:38:05.564881 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:38:05.564890 | orchestrator | 2026-04-06 03:38:05.564898 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-04-06 03:38:05.564907 | orchestrator | Monday 06 April 2026 03:37:58 +0000 (0:00:00.665) 0:00:54.963 ********** 2026-04-06 03:38:05.564949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-06 03:38:05.564960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-06 03:38:05.564978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-06 03:38:05.564994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:38:06.527130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:38:06.527260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:38:06.527277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:38:06.527287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:38:06.527317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:38:06.527328 | orchestrator | 2026-04-06 03:38:06.527338 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-04-06 03:38:06.527350 | orchestrator | Monday 06 April 2026 03:38:05 +0000 (0:00:06.874) 0:01:01.838 ********** 2026-04-06 03:38:06.527379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-06 03:38:06.527390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 03:38:06.527407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 03:38:06.527417 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:38:06.527428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-06 03:38:06.527449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 03:38:06.527459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 03:38:06.527468 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:38:06.527486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-06 03:38:09.007248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 03:38:09.007336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 03:38:09.007366 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:38:09.007375 | orchestrator | 2026-04-06 03:38:09.007383 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-04-06 03:38:09.007391 | orchestrator | Monday 06 April 2026 03:38:06 +0000 (0:00:00.953) 0:01:02.792 ********** 2026-04-06 03:38:09.007400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-06 03:38:09.007412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-06 03:38:09.007440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-06 03:38:09.007459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:38:09.007480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:38:09.007491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:38:09.007502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:38:09.007513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:38:09.007524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:38:09.007534 | orchestrator | 2026-04-06 03:38:09.007546 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-06 03:38:09.007558 | orchestrator | Monday 06 April 2026 03:38:08 +0000 (0:00:02.478) 0:01:05.271 ********** 2026-04-06 03:38:52.915357 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:38:52.915496 | orchestrator | skipping: [testbed-node-1] 2026-04-06 
03:38:52.915514 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:38:52.915530 | orchestrator | 2026-04-06 03:38:52.915545 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-04-06 03:38:52.915577 | orchestrator | Monday 06 April 2026 03:38:09 +0000 (0:00:00.340) 0:01:05.611 ********** 2026-04-06 03:38:52.915620 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:38:52.915635 | orchestrator | 2026-04-06 03:38:52.915649 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-04-06 03:38:52.915662 | orchestrator | Monday 06 April 2026 03:38:11 +0000 (0:00:02.080) 0:01:07.692 ********** 2026-04-06 03:38:52.915673 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:38:52.915712 | orchestrator | 2026-04-06 03:38:52.915725 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-04-06 03:38:52.915739 | orchestrator | Monday 06 April 2026 03:38:13 +0000 (0:00:02.355) 0:01:10.048 ********** 2026-04-06 03:38:52.915751 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:38:52.915764 | orchestrator | 2026-04-06 03:38:52.915777 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-06 03:38:52.915791 | orchestrator | Monday 06 April 2026 03:38:26 +0000 (0:00:13.217) 0:01:23.265 ********** 2026-04-06 03:38:52.915804 | orchestrator | 2026-04-06 03:38:52.915818 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-06 03:38:52.915831 | orchestrator | Monday 06 April 2026 03:38:27 +0000 (0:00:00.083) 0:01:23.349 ********** 2026-04-06 03:38:52.915844 | orchestrator | 2026-04-06 03:38:52.915857 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-06 03:38:52.915870 | orchestrator | Monday 06 April 2026 03:38:27 +0000 (0:00:00.085) 0:01:23.434 ********** 2026-04-06 
03:38:52.915884 | orchestrator | 2026-04-06 03:38:52.915898 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-04-06 03:38:52.915911 | orchestrator | Monday 06 April 2026 03:38:27 +0000 (0:00:00.072) 0:01:23.507 ********** 2026-04-06 03:38:52.915925 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:38:52.915938 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:38:52.915951 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:38:52.915965 | orchestrator | 2026-04-06 03:38:52.915978 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-04-06 03:38:52.915992 | orchestrator | Monday 06 April 2026 03:38:33 +0000 (0:00:06.644) 0:01:30.151 ********** 2026-04-06 03:38:52.916004 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:38:52.916017 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:38:52.916030 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:38:52.916043 | orchestrator | 2026-04-06 03:38:52.916056 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-04-06 03:38:52.916069 | orchestrator | Monday 06 April 2026 03:38:44 +0000 (0:00:10.215) 0:01:40.367 ********** 2026-04-06 03:38:52.916083 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:38:52.916096 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:38:52.916105 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:38:52.916112 | orchestrator | 2026-04-06 03:38:52.916121 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 03:38:52.916130 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-06 03:38:52.916139 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-06 03:38:52.916147 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-06 03:38:52.916155 | orchestrator | 2026-04-06 03:38:52.916163 | orchestrator | 2026-04-06 03:38:52.916171 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 03:38:52.916179 | orchestrator | Monday 06 April 2026 03:38:52 +0000 (0:00:08.416) 0:01:48.783 ********** 2026-04-06 03:38:52.916187 | orchestrator | =============================================================================== 2026-04-06 03:38:52.916195 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.58s 2026-04-06 03:38:52.916203 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.22s 2026-04-06 03:38:52.916220 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.22s 2026-04-06 03:38:52.916228 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 8.42s 2026-04-06 03:38:52.916236 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.87s 2026-04-06 03:38:52.916243 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.64s 2026-04-06 03:38:52.916251 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.55s 2026-04-06 03:38:52.916259 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.08s 2026-04-06 03:38:52.916267 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.85s 2026-04-06 03:38:52.916275 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.62s 2026-04-06 03:38:52.916283 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.59s 2026-04-06 03:38:52.916291 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.56s 
2026-04-06 03:38:52.916299 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.16s 2026-04-06 03:38:52.916307 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.48s 2026-04-06 03:38:52.916315 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.36s 2026-04-06 03:38:52.916345 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.08s 2026-04-06 03:38:52.916357 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.80s 2026-04-06 03:38:52.916369 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.57s 2026-04-06 03:38:52.916382 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.22s 2026-04-06 03:38:52.916402 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.02s 2026-04-06 03:38:55.707034 | orchestrator | 2026-04-06 03:38:55 | INFO  | Task a72c0877-107c-41d5-b5b0-5744872f7339 (designate) was prepared for execution. 2026-04-06 03:38:55.707130 | orchestrator | 2026-04-06 03:38:55 | INFO  | It takes a moment until task a72c0877-107c-41d5-b5b0-5744872f7339 (designate) has been started and output is visible here. 
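The PLAY RECAP above reports per-host counters (`ok`, `changed`, `unreachable`, `failed`, `skipped`, `rescued`, `ignored`) that CI tooling typically inspects to decide whether the play succeeded. As an illustration only (this parser is not part of the job; the line format is assumed from the recap shown above), such a line can be decomposed mechanically:

```python
import re

# Hypothetical helper (not from the job): parse one Ansible PLAY RECAP line,
# e.g. "testbed-node-0 : ok=24 changed=18 unreachable=0 failed=0 ...",
# into a (hostname, {counter: value}) pair.
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<stats>(?:\w+=\d+\s*)+)$")

def parse_recap(line: str) -> tuple[str, dict[str, int]]:
    """Return (hostname, {stat: count}) for a single recap line."""
    m = RECAP_RE.match(line.strip())
    if not m:
        raise ValueError(f"not a recap line: {line!r}")
    stats = {}
    for pair in m.group("stats").split():
        key, _, value = pair.partition("=")
        stats[key] = int(value)
    return m.group("host"), stats

def is_healthy(stats: dict[str, int]) -> bool:
    # A host passed the play when nothing failed and it stayed reachable.
    return stats.get("failed", 0) == 0 and stats.get("unreachable", 0) == 0

host, stats = parse_recap(
    "testbed-node-0 : ok=24 changed=18 unreachable=0 failed=0 "
    "skipped=7 rescued=0 ignored=0"
)
```

This mirrors how the recap for `testbed-node-0` above (`failed=0`, `unreachable=0`) indicates the barbican play completed cleanly before the job moved on to the designate task.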
2026-04-06 03:39:28.640032 | orchestrator | 2026-04-06 03:39:28.640132 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 03:39:28.640144 | orchestrator | 2026-04-06 03:39:28.640152 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 03:39:28.640161 | orchestrator | Monday 06 April 2026 03:39:00 +0000 (0:00:00.311) 0:00:00.311 ********** 2026-04-06 03:39:28.640168 | orchestrator | ok: [testbed-node-0] 2026-04-06 03:39:28.640176 | orchestrator | ok: [testbed-node-1] 2026-04-06 03:39:28.640184 | orchestrator | ok: [testbed-node-2] 2026-04-06 03:39:28.640191 | orchestrator | 2026-04-06 03:39:28.640199 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 03:39:28.640206 | orchestrator | Monday 06 April 2026 03:39:00 +0000 (0:00:00.343) 0:00:00.654 ********** 2026-04-06 03:39:28.640214 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-04-06 03:39:28.640222 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-04-06 03:39:28.640229 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-04-06 03:39:28.640237 | orchestrator | 2026-04-06 03:39:28.640244 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-04-06 03:39:28.640251 | orchestrator | 2026-04-06 03:39:28.640259 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-06 03:39:28.640266 | orchestrator | Monday 06 April 2026 03:39:01 +0000 (0:00:00.507) 0:00:01.162 ********** 2026-04-06 03:39:28.640275 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 03:39:28.640288 | orchestrator | 2026-04-06 03:39:28.640301 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 
2026-04-06 03:39:28.640341 | orchestrator | Monday 06 April 2026 03:39:01 +0000 (0:00:00.637) 0:00:01.799 **********
2026-04-06 03:39:28.640355 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-04-06 03:39:28.640367 | orchestrator |
2026-04-06 03:39:28.640379 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-04-06 03:39:28.640393 | orchestrator | Monday 06 April 2026 03:39:05 +0000 (0:00:03.479) 0:00:05.279 **********
2026-04-06 03:39:28.640407 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-04-06 03:39:28.640421 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-04-06 03:39:28.640436 | orchestrator |
2026-04-06 03:39:28.640449 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-04-06 03:39:28.640462 | orchestrator | Monday 06 April 2026 03:39:11 +0000 (0:00:06.412) 0:00:11.691 **********
2026-04-06 03:39:28.640476 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-06 03:39:28.640489 | orchestrator |
2026-04-06 03:39:28.640501 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-04-06 03:39:28.640508 | orchestrator | Monday 06 April 2026 03:39:15 +0000 (0:00:03.425) 0:00:15.116 **********
2026-04-06 03:39:28.640516 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-06 03:39:28.640523 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-04-06 03:39:28.640530 | orchestrator |
2026-04-06 03:39:28.640554 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-04-06 03:39:28.640570 | orchestrator | Monday 06 April 2026 03:39:19 +0000 (0:00:04.040) 0:00:19.156 **********
2026-04-06 03:39:28.640579 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-06 03:39:28.640588 | orchestrator |
2026-04-06 03:39:28.640597 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-04-06 03:39:28.640605 | orchestrator | Monday 06 April 2026 03:39:22 +0000 (0:00:03.328) 0:00:22.485 **********
2026-04-06 03:39:28.640614 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-04-06 03:39:28.640623 | orchestrator |
2026-04-06 03:39:28.640631 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-04-06 03:39:28.640639 | orchestrator | Monday 06 April 2026 03:39:26 +0000 (0:00:03.804) 0:00:26.290 **********
2026-04-06 03:39:28.640651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-06 03:39:28.640720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-06 03:39:28.640742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-06 03:39:28.640753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-06 03:39:28.640763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-06 03:39:28.640772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-06 03:39:28.640785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-06 03:39:28.640802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-06 03:39:35.010226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-06 03:39:35.010316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-06 03:39:35.010327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-06 03:39:35.010334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-06 03:39:35.010340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port
designate-producer 5672'], 'timeout': '30'}}})
2026-04-06 03:39:35.010360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-06 03:39:35.010395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-06 03:39:35.010402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:39:35.010409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:39:35.010415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:39:35.010421 | orchestrator |
2026-04-06 03:39:35.010428 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2026-04-06 03:39:35.010435 | orchestrator | Monday 06 April 2026 03:39:29 +0000 (0:00:03.019) 0:00:29.309 **********
2026-04-06 03:39:35.010441 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:39:35.010448 | orchestrator |
2026-04-06 03:39:35.010454 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2026-04-06 03:39:35.010460 | orchestrator | Monday 06 April 2026 03:39:29 +0000 (0:00:00.154) 0:00:29.464 **********
2026-04-06 03:39:35.010466 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:39:35.010472 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:39:35.010478 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:39:35.010484 | orchestrator |
2026-04-06 03:39:35.010490 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-06 03:39:35.010496 | orchestrator | Monday 06 April 2026 03:39:30 +0000 (0:00:00.574) 0:00:30.039 **********
2026-04-06 03:39:35.010502 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 03:39:35.010509 | orchestrator |
2026-04-06 03:39:35.010514 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2026-04-06 03:39:35.010520 | orchestrator | Monday 06 April 2026 03:39:30 +0000 (0:00:00.581) 0:00:30.620 **********
2026-04-06 03:39:35.010537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-06 03:39:35.010550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-06 03:39:36.801188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-06 03:39:36.801317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes':
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-06 03:39:36.801344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-06 03:39:36.802414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-06 03:39:36.802470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-06 03:39:36.802506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-06 03:39:36.802517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-06 03:39:36.802526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130',
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-06 03:39:36.802538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-06 03:39:36.802548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-06 03:39:36.802573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-06 03:39:36.802583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-06 03:39:36.802602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-06 03:39:37.761214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:39:37.761320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:39:37.761336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:39:37.761373 | orchestrator |
2026-04-06 03:39:37.761387 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2026-04-06 03:39:37.761400 | orchestrator | Monday 06 April 2026 03:39:36 +0000 (0:00:05.995) 0:00:36.616 **********
2026-04-06 03:39:37.761429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-06 03:39:37.761468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-06 03:39:37.761501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-06 03:39:37.761514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-06 03:39:37.761526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-06 03:39:37.761538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'],
'timeout': '30'}}})  2026-04-06 03:39:37.761558 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:39:37.761577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-06 03:39:37.761589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 03:39:37.761601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 03:39:37.761620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 03:39:38.611563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 03:39:38.611742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 03:39:38.611763 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:39:38.611794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-06 03:39:38.611807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 03:39:38.611818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 03:39:38.611829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 03:39:38.611859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 
03:39:38.611881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 03:39:38.611891 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:39:38.611901 | orchestrator | 2026-04-06 03:39:38.611912 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-04-06 03:39:38.611923 | orchestrator | Monday 06 April 2026 03:39:37 +0000 (0:00:01.072) 0:00:37.688 ********** 2026-04-06 03:39:38.611939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-06 03:39:38.611950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 03:39:38.611961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 03:39:38.611978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 03:39:38.976568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 03:39:38.976725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 03:39:38.976742 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:39:38.976777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-06 03:39:38.976801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 03:39:38.976816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 03:39:38.976830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 03:39:38.976911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 03:39:38.976929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 03:39:38.976943 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:39:38.976965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-06 03:39:38.976980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 03:39:38.976994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 03:39:38.977008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 03:39:38.977042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 03:39:43.346669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 03:39:43.346866 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:39:43.346891 | orchestrator | 2026-04-06 03:39:43.346906 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-04-06 
03:39:43.346923 | orchestrator | Monday 06 April 2026 03:39:38 +0000 (0:00:01.103) 0:00:38.791 ********** 2026-04-06 03:39:43.346958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-06 03:39:43.346976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-06 03:39:43.346993 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-06 03:39:43.347059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 03:39:43.347079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 03:39:43.347094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 03:39:43.347117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-06 03:39:43.347132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-06 03:39:43.347148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-06 03:39:43.347175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-06 03:39:43.347206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-06 03:39:55.321939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-06 03:39:55.322092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-06 03:39:55.322106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-06 03:39:55.322114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-06 03:39:55.322142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:39:55.322149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:39:55.322170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:39:55.322177 | orchestrator | 2026-04-06 03:39:55.322184 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-04-06 03:39:55.322191 | orchestrator | Monday 06 April 2026 03:39:45 +0000 (0:00:06.145) 0:00:44.937 ********** 2026-04-06 03:39:55.322202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-06 03:39:55.322212 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-06 03:39:55.322280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-06 03:39:55.322294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 03:39:55.322309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 03:40:03.960103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 03:40:03.960229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:03.960245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:03.960278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:03.960290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:03.960303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:03.960332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:03.960344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:03.960362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:03.960373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:03.960392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:03.960405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:03.960417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:03.960428 | orchestrator | 2026-04-06 03:40:03.960441 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-04-06 03:40:03.960454 | orchestrator | Monday 06 April 2026 03:40:00 +0000 (0:00:15.053) 0:00:59.991 ********** 2026-04-06 03:40:03.960471 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-06 03:40:08.432541 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-06 03:40:08.432678 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-06 03:40:08.432809 | orchestrator | 2026-04-06 03:40:08.432833 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-04-06 03:40:08.432852 | orchestrator | Monday 06 April 2026 03:40:03 +0000 (0:00:03.785) 0:01:03.776 ********** 2026-04-06 03:40:08.432870 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-06 03:40:08.432890 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-06 03:40:08.432908 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-06 03:40:08.432926 | orchestrator | 2026-04-06 03:40:08.432944 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-04-06 03:40:08.432956 | orchestrator | Monday 06 April 2026 03:40:06 +0000 (0:00:02.563) 0:01:06.339 ********** 2026-04-06 03:40:08.432990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-06 03:40:08.433056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-06 03:40:08.433069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-04-06 03:40:08.433102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 03:40:08.433115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 03:40:08.433147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-04-06 03:40:08.433179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 03:40:08.433191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 03:40:08.433203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-04-06 03:40:08.433215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 03:40:08.433235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 03:40:11.288949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-04-06 03:40:11.289066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 03:40:11.289077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 03:40:11.289084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 03:40:11.289089 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:11.289094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:11.289114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:11.289119 | orchestrator | 2026-04-06 03:40:11.289125 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2026-04-06 03:40:11.289136 | orchestrator | Monday 06 April 2026 03:40:09 +0000 (0:00:02.947) 0:01:09.287 ********** 2026-04-06 03:40:11.289147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-06 03:40:11.289154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-06 
03:40:11.289159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-06 03:40:11.289164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 03:40:11.289172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 03:40:12.321471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 03:40:12.321586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 03:40:12.321600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 03:40:12.321610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 03:40:12.321620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 03:40:12.321628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 03:40:12.321653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 03:40:12.321687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 03:40:12.321761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 03:40:12.321772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 03:40:12.321781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:12.321789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:40:12.321798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:40:12.321813 | orchestrator |
2026-04-06 03:40:12.321823 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-06 03:40:12.321838 | orchestrator | Monday 06 April 2026 03:40:12 +0000 (0:00:02.846) 0:01:12.134 **********
2026-04-06 03:40:13.364065 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:40:13.364191 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:40:13.364215 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:40:13.364233 | orchestrator |
2026-04-06 03:40:13.364250 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-04-06 03:40:13.364267 | orchestrator | Monday 06 April 2026 03:40:12 +0000 (0:00:00.328) 0:01:12.462 **********
2026-04-06 03:40:13.364308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-06 03:40:13.364332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 03:40:13.364349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 03:40:13.364365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 03:40:13.364381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 03:40:13.364450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 03:40:13.364470 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:40:13.364494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-06 03:40:13.364512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 03:40:13.364528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 03:40:13.364543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 03:40:13.364570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 03:40:13.364596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 03:40:16.823632 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:40:16.823779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-06 03:40:16.823794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 03:40:16.823801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 03:40:16.823808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 03:40:16.823831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 03:40:16.823837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:40:16.823842 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:40:16.823847 | orchestrator |
2026-04-06 03:40:16.823865 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-04-06 03:40:16.823872 | orchestrator | Monday 06 April 2026 03:40:13 +0000 (0:00:00.840) 0:01:13.303 **********
2026-04-06 03:40:16.823880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-06 03:40:16.823886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-06 03:40:16.823891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-06 03:40:16.823901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 03:40:16.823910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 03:40:18.742993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 03:40:18.743104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:18.743121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:18.743135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:18.743169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:18.743183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:18.743215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:18.743234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:18.743247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:18.743258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-06 03:40:18.743278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:40:18.743290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:40:18.743301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:40:18.743314 | orchestrator |
2026-04-06 03:40:18.743328 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-06 03:40:18.743341 | orchestrator | Monday 06 April 2026 03:40:18 +0000 (0:00:04.671) 0:01:17.974 **********
2026-04-06 03:40:18.743352 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:40:18.743371 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:41:44.873653 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:41:44.873849 | orchestrator |
2026-04-06 03:41:44.873871 | orchestrator | TASK [designate : Creating Designate databases]
********************************
2026-04-06 03:41:44.873885 | orchestrator | Monday 06 April 2026 03:40:18 +0000 (0:00:00.589) 0:01:18.563 **********
2026-04-06 03:41:44.873897 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-04-06 03:41:44.873909 | orchestrator |
2026-04-06 03:41:44.873937 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-04-06 03:41:44.873949 | orchestrator | Monday 06 April 2026 03:40:20 +0000 (0:00:02.112) 0:01:20.676 **********
2026-04-06 03:41:44.873961 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-06 03:41:44.873972 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-04-06 03:41:44.873983 | orchestrator |
2026-04-06 03:41:44.873994 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-04-06 03:41:44.874007 | orchestrator | Monday 06 April 2026 03:40:23 +0000 (0:00:02.335) 0:01:23.011 **********
2026-04-06 03:41:44.874113 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:41:44.874137 | orchestrator |
2026-04-06 03:41:44.874158 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-06 03:41:44.874177 | orchestrator | Monday 06 April 2026 03:40:39 +0000 (0:00:16.020) 0:01:39.032 **********
2026-04-06 03:41:44.874198 | orchestrator |
2026-04-06 03:41:44.874219 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-06 03:41:44.874240 | orchestrator | Monday 06 April 2026 03:40:39 +0000 (0:00:00.073) 0:01:39.106 **********
2026-04-06 03:41:44.874263 | orchestrator |
2026-04-06 03:41:44.874285 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-06 03:41:44.874335 | orchestrator | Monday 06 April 2026 03:40:39 +0000 (0:00:00.077) 0:01:39.183 **********
2026-04-06 03:41:44.874349 | orchestrator |
2026-04-06 03:41:44.874363 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-04-06 03:41:44.874376 | orchestrator | Monday 06 April 2026 03:40:39 +0000 (0:00:00.082) 0:01:39.266 **********
2026-04-06 03:41:44.874389 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:41:44.874402 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:41:44.874415 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:41:44.874429 | orchestrator |
2026-04-06 03:41:44.874442 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-04-06 03:41:44.874454 | orchestrator | Monday 06 April 2026 03:40:48 +0000 (0:00:09.091) 0:01:48.357 **********
2026-04-06 03:41:44.874470 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:41:44.874488 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:41:44.874515 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:41:44.874537 | orchestrator |
2026-04-06 03:41:44.874555 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-04-06 03:41:44.874573 | orchestrator | Monday 06 April 2026 03:40:59 +0000 (0:00:11.041) 0:01:59.399 **********
2026-04-06 03:41:44.874589 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:41:44.874605 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:41:44.874622 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:41:44.874639 | orchestrator |
2026-04-06 03:41:44.874657 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-04-06 03:41:44.874676 | orchestrator | Monday 06 April 2026 03:41:10 +0000 (0:00:11.001) 0:02:10.401 **********
2026-04-06 03:41:44.874694 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:41:44.874712 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:41:44.874758 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:41:44.874776 | orchestrator |
2026-04-06 03:41:44.874794 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-04-06 03:41:44.874812 | orchestrator | Monday 06 April 2026 03:41:16 +0000 (0:00:05.845) 0:02:16.246 **********
2026-04-06 03:41:44.874844 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:41:44.874863 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:41:44.874881 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:41:44.874898 | orchestrator |
2026-04-06 03:41:44.874916 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-04-06 03:41:44.874935 | orchestrator | Monday 06 April 2026 03:41:25 +0000 (0:00:09.246) 0:02:25.492 **********
2026-04-06 03:41:44.874954 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:41:44.874972 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:41:44.874990 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:41:44.875010 | orchestrator |
2026-04-06 03:41:44.875029 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-04-06 03:41:44.875047 | orchestrator | Monday 06 April 2026 03:41:36 +0000 (0:00:11.254) 0:02:36.747 **********
2026-04-06 03:41:44.875066 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:41:44.875078 | orchestrator |
2026-04-06 03:41:44.875089 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 03:41:44.875102 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-06 03:41:44.875114 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 03:41:44.875126 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 03:41:44.875136 | orchestrator |
2026-04-06 03:41:44.875147 | orchestrator |
2026-04-06 03:41:44.875158 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 03:41:44.875169 | orchestrator | Monday 06 April 2026 03:41:44 +0000 (0:00:07.490) 0:02:44.237 **********
2026-04-06 03:41:44.875206 | orchestrator | ===============================================================================
2026-04-06 03:41:44.875234 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.02s
2026-04-06 03:41:44.875252 | orchestrator | designate : Copying over designate.conf -------------------------------- 15.05s
2026-04-06 03:41:44.875298 | orchestrator | designate : Restart designate-worker container ------------------------- 11.25s
2026-04-06 03:41:44.875317 | orchestrator | designate : Restart designate-api container ---------------------------- 11.04s
2026-04-06 03:41:44.875334 | orchestrator | designate : Restart designate-central container ------------------------ 11.00s
2026-04-06 03:41:44.875365 | orchestrator | designate : Restart designate-mdns container ---------------------------- 9.25s
2026-04-06 03:41:44.875384 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 9.09s
2026-04-06 03:41:44.875404 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.49s
2026-04-06 03:41:44.875422 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.41s
2026-04-06 03:41:44.875440 | orchestrator | designate : Copying over config.json files for services ----------------- 6.15s
2026-04-06 03:41:44.875459 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.00s
2026-04-06 03:41:44.875476 | orchestrator | designate : Restart designate-producer container ------------------------ 5.85s
2026-04-06 03:41:44.875495 | orchestrator | designate : Check designate containers ---------------------------------- 4.67s
2026-04-06 03:41:44.875514 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.04s
2026-04-06 03:41:44.875533 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.80s
2026-04-06 03:41:44.875552 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.79s
2026-04-06 03:41:44.875592 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.48s
2026-04-06 03:41:44.875623 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.43s
2026-04-06 03:41:44.875654 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.33s
2026-04-06 03:41:44.875672 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.02s
2026-04-06 03:41:47.415252 | orchestrator | 2026-04-06 03:41:47 | INFO  | Task a7cb0b28-731a-4437-b43f-10caee508023 (octavia) was prepared for execution.
2026-04-06 03:41:47.415363 | orchestrator | 2026-04-06 03:41:47 | INFO  | It takes a moment until task a7cb0b28-731a-4437-b43f-10caee508023 (octavia) has been started and output is visible here.
2026-04-06 03:43:57.579527 | orchestrator |
2026-04-06 03:43:57.579661 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 03:43:57.579686 | orchestrator |
2026-04-06 03:43:57.579698 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 03:43:57.579709 | orchestrator | Monday 06 April 2026 03:41:52 +0000 (0:00:00.267) 0:00:00.267 **********
2026-04-06 03:43:57.579719 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:43:57.579731 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:43:57.579740 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:43:57.579776 | orchestrator |
2026-04-06 03:43:57.579787 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 03:43:57.579797 | orchestrator | Monday 06 April 2026 03:41:52 +0000 (0:00:00.357) 0:00:00.624 **********
2026-04-06 03:43:57.579807 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-04-06 03:43:57.579818 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-04-06 03:43:57.579828 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-04-06 03:43:57.579838 | orchestrator |
2026-04-06 03:43:57.579848 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-04-06 03:43:57.579858 | orchestrator |
2026-04-06 03:43:57.579868 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-06 03:43:57.579879 | orchestrator | Monday 06 April 2026 03:41:52 +0000 (0:00:00.544) 0:00:01.169 **********
2026-04-06 03:43:57.579917 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 03:43:57.579931 | orchestrator |
2026-04-06 03:43:57.579948 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-04-06 03:43:57.579965 | orchestrator | Monday 06 April 2026 03:41:53 +0000 (0:00:00.645) 0:00:01.815 **********
2026-04-06 03:43:57.579983 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-04-06 03:43:57.579999 | orchestrator |
2026-04-06 03:43:57.580011 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-04-06 03:43:57.580022 | orchestrator | Monday 06 April 2026 03:41:57 +0000 (0:00:03.531) 0:00:05.347 **********
2026-04-06 03:43:57.580034 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-04-06 03:43:57.580045 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-04-06 03:43:57.580057 | orchestrator |
2026-04-06 03:43:57.580069 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-04-06 03:43:57.580080 | orchestrator | Monday 06 April 2026 03:42:03 +0000 (0:00:06.695) 0:00:12.042 **********
2026-04-06 03:43:57.580091 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-06 03:43:57.580102 | orchestrator |
2026-04-06 03:43:57.580113 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-04-06 03:43:57.580129 | orchestrator | Monday 06 April 2026 03:42:06 +0000 (0:00:03.175) 0:00:15.217 **********
2026-04-06 03:43:57.580146 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-06 03:43:57.580162 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-04-06 03:43:57.580178 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-04-06 03:43:57.580195 | orchestrator |
2026-04-06 03:43:57.580213 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-04-06 03:43:57.580229 | orchestrator | Monday 06 April 2026 03:42:15 +0000 (0:00:08.319) 0:00:23.536 **********
2026-04-06 03:43:57.580245 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-06 03:43:57.580257 | orchestrator |
2026-04-06 03:43:57.580268 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-04-06 03:43:57.580281 | orchestrator | Monday 06 April 2026 03:42:18 +0000 (0:00:03.295) 0:00:26.832 **********
2026-04-06 03:43:57.580310 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-04-06 03:43:57.580321 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-04-06 03:43:57.580332 | orchestrator |
2026-04-06 03:43:57.580343 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-04-06 03:43:57.580355 | orchestrator | Monday 06 April 2026 03:42:25 +0000 (0:00:07.403) 0:00:34.235 **********
2026-04-06 03:43:57.580366 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-04-06 03:43:57.580377 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-04-06 03:43:57.580386 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-04-06 03:43:57.580396 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-04-06 03:43:57.580406 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-04-06 03:43:57.580415 | orchestrator |
2026-04-06 03:43:57.580425 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-06 03:43:57.580434 | orchestrator | Monday 06 April 2026 03:42:41 +0000 (0:00:15.976) 0:00:50.212 **********
2026-04-06 03:43:57.580444 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 03:43:57.580454 | orchestrator |
2026-04-06 03:43:57.580464 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-04-06 03:43:57.580473 | orchestrator | Monday 06 April 2026 03:42:42 +0000 (0:00:00.895) 0:00:51.107 **********
2026-04-06 03:43:57.580493 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:43:57.580503 | orchestrator |
2026-04-06 03:43:57.580512 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-04-06 03:43:57.580522 | orchestrator | Monday 06 April 2026 03:42:47 +0000 (0:00:04.921) 0:00:56.028 **********
2026-04-06 03:43:57.580531 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:43:57.580541 | orchestrator |
2026-04-06 03:43:57.580551 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-04-06 03:43:57.580581 | orchestrator | Monday 06 April 2026 03:42:52 +0000 (0:00:04.766) 0:01:00.794 **********
2026-04-06 03:43:57.580592 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:43:57.580601 | orchestrator |
2026-04-06 03:43:57.580611 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-04-06 03:43:57.580621 | orchestrator | Monday 06 April 2026 03:42:55 +0000 (0:00:03.383) 0:01:04.178 **********
2026-04-06 03:43:57.580630 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-04-06 03:43:57.580640 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-04-06 03:43:57.580649 | orchestrator |
2026-04-06 03:43:57.580659 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-04-06 03:43:57.580669 | orchestrator | Monday 06 April 2026 03:43:05 +0000 (0:00:09.809) 0:01:13.987 **********
2026-04-06 03:43:57.580678 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-04-06 03:43:57.580688 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-04-06 03:43:57.580699 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-04-06 03:43:57.580710 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-04-06 03:43:57.580720 | orchestrator |
2026-04-06 03:43:57.580730 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-04-06 03:43:57.580740 | orchestrator | Monday 06 April 2026 03:43:21 +0000 (0:00:16.206) 0:01:30.194 **********
2026-04-06 03:43:57.580775 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:43:57.580790 | orchestrator |
2026-04-06 03:43:57.580800 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-04-06 03:43:57.580810 | orchestrator | Monday 06 April 2026 03:43:26 +0000 (0:00:04.859) 0:01:35.053 **********
2026-04-06 03:43:57.580820 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:43:57.580829 | orchestrator |
2026-04-06 03:43:57.580839 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-04-06 03:43:57.580848 | orchestrator | Monday 06 April 2026 03:43:33 +0000 (0:00:06.445) 0:01:41.498 **********
2026-04-06 03:43:57.580858 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:43:57.580868 | orchestrator |
2026-04-06 03:43:57.580878 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-04-06 03:43:57.580887 | orchestrator | Monday 06 April 2026 03:43:33 +0000 (0:00:00.223) 0:01:41.722 **********
2026-04-06 03:43:57.580897 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:43:57.580907 | orchestrator |
2026-04-06 03:43:57.580916 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-06 03:43:57.580926 | orchestrator | Monday 06 April 2026 03:43:38 +0000 (0:00:04.641) 0:01:46.364 **********
2026-04-06 03:43:57.580936 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 03:43:57.580946 | orchestrator |
2026-04-06 03:43:57.580955 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-04-06 03:43:57.580965 | orchestrator | Monday 06 April 2026 03:43:39 +0000 (0:00:01.248) 0:01:47.613 **********
2026-04-06 03:43:57.580975 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:43:57.580984 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:43:57.581001 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:43:57.581011 | orchestrator |
2026-04-06 03:43:57.581021 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-04-06 03:43:57.581030 | orchestrator | Monday 06 April 2026 03:43:44 +0000 (0:00:05.511) 0:01:53.124 **********
2026-04-06 03:43:57.581040 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:43:57.581068 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:43:57.581088 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:43:57.581097 | orchestrator |
2026-04-06 03:43:57.581107 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-04-06 03:43:57.581117 | orchestrator | Monday 06 April 2026 03:43:49 +0000 (0:00:04.679) 0:01:57.803 **********
2026-04-06 03:43:57.581127 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:43:57.581136 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:43:57.581146 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:43:57.581155 | orchestrator |
2026-04-06 03:43:57.581165 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-04-06
03:43:57.581175 | orchestrator | Monday 06 April 2026 03:43:50 +0000 (0:00:01.113) 0:01:58.916 ********** 2026-04-06 03:43:57.581184 | orchestrator | ok: [testbed-node-0] 2026-04-06 03:43:57.581194 | orchestrator | ok: [testbed-node-2] 2026-04-06 03:43:57.581204 | orchestrator | ok: [testbed-node-1] 2026-04-06 03:43:57.581213 | orchestrator | 2026-04-06 03:43:57.581223 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-04-06 03:43:57.581232 | orchestrator | Monday 06 April 2026 03:43:52 +0000 (0:00:02.009) 0:02:00.926 ********** 2026-04-06 03:43:57.581242 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:43:57.581252 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:43:57.581261 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:43:57.581271 | orchestrator | 2026-04-06 03:43:57.581281 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-04-06 03:43:57.581291 | orchestrator | Monday 06 April 2026 03:43:53 +0000 (0:00:01.326) 0:02:02.252 ********** 2026-04-06 03:43:57.581300 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:43:57.581310 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:43:57.581319 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:43:57.581329 | orchestrator | 2026-04-06 03:43:57.581338 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-04-06 03:43:57.581348 | orchestrator | Monday 06 April 2026 03:43:55 +0000 (0:00:01.303) 0:02:03.556 ********** 2026-04-06 03:43:57.581358 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:43:57.581367 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:43:57.581377 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:43:57.581387 | orchestrator | 2026-04-06 03:43:57.581403 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-04-06 03:44:23.876862 | orchestrator 
| Monday 06 April 2026 03:43:57 +0000 (0:00:02.277) 0:02:05.833 ********** 2026-04-06 03:44:23.876960 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:44:23.876971 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:44:23.876978 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:44:23.876985 | orchestrator | 2026-04-06 03:44:23.876992 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-04-06 03:44:23.876999 | orchestrator | Monday 06 April 2026 03:43:59 +0000 (0:00:01.686) 0:02:07.520 ********** 2026-04-06 03:44:23.877006 | orchestrator | ok: [testbed-node-0] 2026-04-06 03:44:23.877013 | orchestrator | ok: [testbed-node-1] 2026-04-06 03:44:23.877019 | orchestrator | ok: [testbed-node-2] 2026-04-06 03:44:23.877026 | orchestrator | 2026-04-06 03:44:23.877032 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-04-06 03:44:23.877039 | orchestrator | Monday 06 April 2026 03:43:59 +0000 (0:00:00.649) 0:02:08.169 ********** 2026-04-06 03:44:23.877045 | orchestrator | ok: [testbed-node-2] 2026-04-06 03:44:23.877052 | orchestrator | ok: [testbed-node-0] 2026-04-06 03:44:23.877058 | orchestrator | ok: [testbed-node-1] 2026-04-06 03:44:23.877081 | orchestrator | 2026-04-06 03:44:23.877088 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-06 03:44:23.877094 | orchestrator | Monday 06 April 2026 03:44:03 +0000 (0:00:03.178) 0:02:11.348 ********** 2026-04-06 03:44:23.877101 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 03:44:23.877108 | orchestrator | 2026-04-06 03:44:23.877114 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-04-06 03:44:23.877124 | orchestrator | Monday 06 April 2026 03:44:03 +0000 (0:00:00.575) 0:02:11.923 ********** 2026-04-06 
03:44:23.877133 | orchestrator | ok: [testbed-node-0] 2026-04-06 03:44:23.877143 | orchestrator | 2026-04-06 03:44:23.877153 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-04-06 03:44:23.877162 | orchestrator | Monday 06 April 2026 03:44:07 +0000 (0:00:03.651) 0:02:15.575 ********** 2026-04-06 03:44:23.877172 | orchestrator | ok: [testbed-node-0] 2026-04-06 03:44:23.877182 | orchestrator | 2026-04-06 03:44:23.877191 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-04-06 03:44:23.877201 | orchestrator | Monday 06 April 2026 03:44:10 +0000 (0:00:02.961) 0:02:18.537 ********** 2026-04-06 03:44:23.877211 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-06 03:44:23.877221 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-04-06 03:44:23.877233 | orchestrator | 2026-04-06 03:44:23.877243 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-04-06 03:44:23.877252 | orchestrator | Monday 06 April 2026 03:44:16 +0000 (0:00:06.417) 0:02:24.954 ********** 2026-04-06 03:44:23.877262 | orchestrator | ok: [testbed-node-0] 2026-04-06 03:44:23.877272 | orchestrator | 2026-04-06 03:44:23.877282 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-04-06 03:44:23.877292 | orchestrator | Monday 06 April 2026 03:44:21 +0000 (0:00:04.360) 0:02:29.314 ********** 2026-04-06 03:44:23.877303 | orchestrator | ok: [testbed-node-0] 2026-04-06 03:44:23.877314 | orchestrator | ok: [testbed-node-1] 2026-04-06 03:44:23.877325 | orchestrator | ok: [testbed-node-2] 2026-04-06 03:44:23.877336 | orchestrator | 2026-04-06 03:44:23.877347 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-04-06 03:44:23.877358 | orchestrator | Monday 06 April 2026 03:44:21 +0000 (0:00:00.617) 0:02:29.931 ********** 
2026-04-06 03:44:23.877391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 03:44:23.877431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 03:44:23.877454 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 03:44:23.877466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-06 03:44:23.877479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-06 03:44:23.877490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-06 03:44:23.877508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 03:44:23.877522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 03:44:23.877550 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 03:44:25.447980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 03:44:25.448080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 03:44:25.448095 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 03:44:25.448123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:44:25.448135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:44:25.448145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:44:25.448192 | orchestrator | 2026-04-06 03:44:25.448214 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-04-06 03:44:25.448226 | orchestrator | Monday 06 April 2026 03:44:24 +0000 (0:00:02.621) 0:02:32.553 ********** 2026-04-06 03:44:25.448236 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:44:25.448246 | orchestrator | 2026-04-06 03:44:25.448256 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-04-06 03:44:25.448266 | orchestrator | Monday 06 April 2026 03:44:24 +0000 (0:00:00.134) 0:02:32.687 ********** 2026-04-06 03:44:25.448275 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:44:25.448302 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:44:25.448313 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:44:25.448323 | orchestrator | 2026-04-06 03:44:25.448333 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-04-06 03:44:25.448343 | orchestrator | Monday 06 April 2026 03:44:24 +0000 (0:00:00.360) 0:02:33.048 ********** 2026-04-06 03:44:25.448355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-06 03:44:25.448367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 03:44:25.448385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 03:44:25.448396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 03:44:25.448413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 03:44:25.448424 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:44:25.448442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-06 03:44:30.531526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 03:44:30.531625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 03:44:30.531655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 03:44:30.531664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 03:44:30.531691 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:44:30.531703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-06 03:44:30.531711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 03:44:30.531735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 03:44:30.531744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 03:44:30.531804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 03:44:30.531813 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:44:30.531820 | orchestrator | 2026-04-06 03:44:30.531838 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-06 03:44:30.531846 | orchestrator | Monday 06 April 2026 03:44:25 +0000 (0:00:00.751) 0:02:33.799 ********** 2026-04-06 03:44:30.531854 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 03:44:30.531861 | orchestrator | 2026-04-06 03:44:30.531867 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-04-06 03:44:30.531873 | orchestrator | Monday 06 April 2026 03:44:26 +0000 (0:00:00.841) 0:02:34.641 ********** 2026-04-06 03:44:30.531880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 03:44:30.531888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 03:44:30.531901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 03:44:32.197171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-06 03:44:32.197326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-06 03:44:32.197342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-06 03:44:32.197351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 03:44:32.197360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 03:44:32.197367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 03:44:32.197389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 03:44:32.197397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 03:44:32.197415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 03:44:32.197423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:44:32.197430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:44:32.197450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:44:32.197465 | orchestrator | 2026-04-06 03:44:32.197473 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-04-06 03:44:32.197482 | orchestrator | Monday 06 April 2026 03:44:31 +0000 (0:00:05.206) 0:02:39.847 ********** 2026-04-06 03:44:32.197496 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-06 03:44:32.303325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 03:44:32.303504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-06 03:44:32.303526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-06 03:44:32.303544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:44:32.303560 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:44:32.303593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-06 03:44:32.303608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-06 03:44:32.303645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-06 03:44:32.303677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-06 03:44:32.303691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:44:32.303705 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:44:32.303720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-06 03:44:32.303734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-06 03:44:32.303748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-06 03:44:32.303854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-06 03:44:33.158816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:44:33.158942 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:44:33.158968 | orchestrator |
2026-04-06 03:44:33.158986 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] *****
2026-04-06 03:44:33.159005 | orchestrator | Monday 06 April 2026 03:44:32 +0000 (0:00:00.715) 0:02:40.563 **********
2026-04-06 03:44:33.159023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-06 03:44:33.159043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-06 03:44:33.159062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-06 03:44:33.159075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-06 03:44:33.159127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:44:33.159139 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:44:33.159159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-06 03:44:33.159170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-06 03:44:33.159180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-06 03:44:33.159191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-06 03:44:33.159201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:44:33.159218 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:44:33.159235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-06 03:44:37.636327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-06 03:44:37.636434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-06 03:44:37.636465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-06 03:44:37.636487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:44:37.636515 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:44:37.636527 | orchestrator |
2026-04-06 03:44:37.636537 | orchestrator | TASK [octavia : Copying over config.json files for services] *******************
2026-04-06 03:44:37.636548 | orchestrator | Monday 06 April 2026 03:44:33 +0000 (0:00:01.391) 0:02:41.954 **********
2026-04-06 03:44:37.636558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-06 03:44:37.636594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-06 03:44:37.636605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-06 03:44:37.636615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-06 03:44:37.636625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-06 03:44:37.636641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-06 03:44:37.636651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-06 03:44:37.636666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-06 03:44:54.327196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-06 03:44:54.327293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-06 03:44:54.327302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-06 03:44:54.327324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-06 03:44:54.327330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:44:54.327334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:44:54.327357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:44:54.327364 | orchestrator |
2026-04-06 03:44:54.327372 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2026-04-06 03:44:54.327380 | orchestrator | Monday 06 April 2026 03:44:38 +0000 (0:00:04.844) 0:02:46.798 **********
2026-04-06 03:44:54.327387 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-06 03:44:54.327395 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-06 03:44:54.327401 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-06 03:44:54.327406 | orchestrator |
2026-04-06 03:44:54.327413 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2026-04-06 03:44:54.327420 | orchestrator | Monday 06 April 2026 03:44:40 +0000 (0:00:01.612) 0:02:48.410 **********
2026-04-06 03:44:54.327425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-06 03:44:54.327435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-06 03:44:54.327440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-06 03:44:54.327452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-06 03:45:10.457460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-06 03:45:10.457541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-06 03:45:10.457549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-06 03:45:10.457583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-06 03:45:10.457606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-06 03:45:10.457621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-06 03:45:10.457658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-06 03:45:10.457664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-06 03:45:10.457671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:45:10.457684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-06 03:45:10.457690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:45:10.457698 | orchestrator | 2026-04-06 03:45:10.457710 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-04-06 03:45:10.457718 | orchestrator | Monday 06 April 2026 03:44:57 +0000 (0:00:17.789) 0:03:06.199 ********** 2026-04-06 03:45:10.457725 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:45:10.457732 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:45:10.457738 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:45:10.457744 | orchestrator | 2026-04-06 03:45:10.457750 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-04-06 03:45:10.457756 | orchestrator | Monday 06 April 2026 03:44:59 +0000 (0:00:01.841) 0:03:08.041 ********** 2026-04-06 03:45:10.457805 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-06 03:45:10.457815 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-06 03:45:10.457822 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-06 03:45:10.457828 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-06 03:45:10.457836 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-06 03:45:10.457842 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-06 03:45:10.457849 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-06 03:45:10.457856 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-06 03:45:10.457864 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-06 03:45:10.457870 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-06 03:45:10.457874 | orchestrator 
| changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-06 03:45:10.457878 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-06 03:45:10.457882 | orchestrator | 2026-04-06 03:45:10.457886 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-04-06 03:45:10.457890 | orchestrator | Monday 06 April 2026 03:45:04 +0000 (0:00:05.221) 0:03:13.262 ********** 2026-04-06 03:45:10.457899 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-06 03:45:10.457903 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-06 03:45:10.457913 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-06 03:45:19.269902 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-06 03:45:19.270009 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-06 03:45:19.270052 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-06 03:45:19.270058 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-06 03:45:19.270063 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-06 03:45:19.270068 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-06 03:45:19.270073 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-06 03:45:19.270081 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-06 03:45:19.270089 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-06 03:45:19.270097 | orchestrator | 2026-04-06 03:45:19.270105 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-04-06 03:45:19.270114 | orchestrator | Monday 06 April 2026 03:45:10 +0000 (0:00:05.449) 0:03:18.711 ********** 2026-04-06 03:45:19.270122 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-04-06 03:45:19.270131 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-06 03:45:19.270140 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-06 03:45:19.270148 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-06 03:45:19.270165 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-06 03:45:19.270174 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-06 03:45:19.270185 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-06 03:45:19.270190 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-06 03:45:19.270195 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-06 03:45:19.270200 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-06 03:45:19.270205 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-06 03:45:19.270209 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-06 03:45:19.270214 | orchestrator | 2026-04-06 03:45:19.270219 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-04-06 03:45:19.270223 | orchestrator | Monday 06 April 2026 03:45:15 +0000 (0:00:05.544) 0:03:24.256 ********** 2026-04-06 03:45:19.270231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 03:45:19.270238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 03:45:19.270283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 03:45:19.270290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-06 03:45:19.270298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-06 03:45:19.270303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-04-06 03:45:19.270308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 03:45:19.270314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 03:45:19.270333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 03:45:19.270346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 03:46:44.544134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 03:46:44.544293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 03:46:44.544314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:46:44.544330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:46:44.544344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 03:46:44.544397 | orchestrator | 2026-04-06 
03:46:44.544413 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-06 03:46:44.544428 | orchestrator | Monday 06 April 2026 03:45:20 +0000 (0:00:04.246) 0:03:28.502 ********** 2026-04-06 03:46:44.544439 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:46:44.544453 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:46:44.544465 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:46:44.544477 | orchestrator | 2026-04-06 03:46:44.544489 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-04-06 03:46:44.544522 | orchestrator | Monday 06 April 2026 03:45:20 +0000 (0:00:00.590) 0:03:29.093 ********** 2026-04-06 03:46:44.544535 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:46:44.544547 | orchestrator | 2026-04-06 03:46:44.544559 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-04-06 03:46:44.544571 | orchestrator | Monday 06 April 2026 03:45:23 +0000 (0:00:02.182) 0:03:31.275 ********** 2026-04-06 03:46:44.544584 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:46:44.544597 | orchestrator | 2026-04-06 03:46:44.544609 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-04-06 03:46:44.544622 | orchestrator | Monday 06 April 2026 03:45:25 +0000 (0:00:02.306) 0:03:33.582 ********** 2026-04-06 03:46:44.544631 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:46:44.544639 | orchestrator | 2026-04-06 03:46:44.544648 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-04-06 03:46:44.544658 | orchestrator | Monday 06 April 2026 03:45:27 +0000 (0:00:02.380) 0:03:35.963 ********** 2026-04-06 03:46:44.544688 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:46:44.544697 | orchestrator | 2026-04-06 03:46:44.544706 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-04-06 03:46:44.544714 | orchestrator | Monday 06 April 2026 03:45:29 +0000 (0:00:02.252) 0:03:38.216 ********** 2026-04-06 03:46:44.544723 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:46:44.544731 | orchestrator | 2026-04-06 03:46:44.544739 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-06 03:46:44.544747 | orchestrator | Monday 06 April 2026 03:45:53 +0000 (0:00:23.223) 0:04:01.439 ********** 2026-04-06 03:46:44.544756 | orchestrator | 2026-04-06 03:46:44.544765 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-06 03:46:44.544773 | orchestrator | Monday 06 April 2026 03:45:53 +0000 (0:00:00.074) 0:04:01.514 ********** 2026-04-06 03:46:44.544799 | orchestrator | 2026-04-06 03:46:44.544808 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-06 03:46:44.544817 | orchestrator | Monday 06 April 2026 03:45:53 +0000 (0:00:00.092) 0:04:01.606 ********** 2026-04-06 03:46:44.544825 | orchestrator | 2026-04-06 03:46:44.544833 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-04-06 03:46:44.544841 | orchestrator | Monday 06 April 2026 03:45:53 +0000 (0:00:00.084) 0:04:01.691 ********** 2026-04-06 03:46:44.544850 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:46:44.544858 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:46:44.544866 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:46:44.544874 | orchestrator | 2026-04-06 03:46:44.544882 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-04-06 03:46:44.544890 | orchestrator | Monday 06 April 2026 03:46:10 +0000 (0:00:16.854) 0:04:18.546 ********** 2026-04-06 03:46:44.544898 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:46:44.544918 | orchestrator | changed: 
[testbed-node-2] 2026-04-06 03:46:44.544927 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:46:44.544935 | orchestrator | 2026-04-06 03:46:44.544944 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-04-06 03:46:44.544952 | orchestrator | Monday 06 April 2026 03:46:16 +0000 (0:00:06.655) 0:04:25.202 ********** 2026-04-06 03:46:44.544960 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:46:44.544969 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:46:44.544977 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:46:44.544985 | orchestrator | 2026-04-06 03:46:44.544994 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-04-06 03:46:44.545003 | orchestrator | Monday 06 April 2026 03:46:27 +0000 (0:00:10.515) 0:04:35.717 ********** 2026-04-06 03:46:44.545011 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:46:44.545019 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:46:44.545026 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:46:44.545033 | orchestrator | 2026-04-06 03:46:44.545040 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-04-06 03:46:44.545047 | orchestrator | Monday 06 April 2026 03:46:33 +0000 (0:00:05.886) 0:04:41.604 ********** 2026-04-06 03:46:44.545055 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:46:44.545062 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:46:44.545069 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:46:44.545076 | orchestrator | 2026-04-06 03:46:44.545083 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 03:46:44.545092 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-06 03:46:44.545102 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-04-06 03:46:44.545109 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-06 03:46:44.545116 | orchestrator | 2026-04-06 03:46:44.545124 | orchestrator | 2026-04-06 03:46:44.545131 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 03:46:44.545139 | orchestrator | Monday 06 April 2026 03:46:44 +0000 (0:00:11.168) 0:04:52.772 ********** 2026-04-06 03:46:44.545146 | orchestrator | =============================================================================== 2026-04-06 03:46:44.545153 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 23.22s 2026-04-06 03:46:44.545161 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.79s 2026-04-06 03:46:44.545168 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.86s 2026-04-06 03:46:44.545175 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.21s 2026-04-06 03:46:44.545182 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.98s 2026-04-06 03:46:44.545190 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 11.17s 2026-04-06 03:46:44.545197 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.52s 2026-04-06 03:46:44.545210 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.81s 2026-04-06 03:46:44.545217 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.32s 2026-04-06 03:46:44.545224 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.40s 2026-04-06 03:46:44.545231 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.70s 2026-04-06 03:46:44.545239 
| orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.66s 2026-04-06 03:46:44.545246 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 6.45s 2026-04-06 03:46:44.545253 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.42s 2026-04-06 03:46:44.545274 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.89s 2026-04-06 03:46:44.942310 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.54s 2026-04-06 03:46:44.942417 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.51s 2026-04-06 03:46:44.942431 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.45s 2026-04-06 03:46:44.942442 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.22s 2026-04-06 03:46:44.942452 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.21s 2026-04-06 03:46:47.405543 | orchestrator | 2026-04-06 03:46:47 | INFO  | Task eddabfd3-844f-4b4f-9739-c61e7975acc5 (ceilometer) was prepared for execution. 2026-04-06 03:46:47.405703 | orchestrator | 2026-04-06 03:46:47 | INFO  | It takes a moment until task eddabfd3-844f-4b4f-9739-c61e7975acc5 (ceilometer) has been started and output is visible here. 
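Aside: every container definition logged above carries a healthcheck block (`interval`, `retries`, `start_period`, `test`, `timeout`) whose `test` invokes kolla's `healthcheck_curl` or `healthcheck_port` helper. A minimal sketch of the retry semantics those fields imply, in Python; the `probe` callable standing in for the helper is an assumption for illustration, not kolla code:

```python
# Sketch of healthcheck retry semantics: a container is reported
# unhealthy only after the probe fails `retries` consecutive times.
def is_healthy(probe, retries=3):
    """Run `probe` up to `retries` times; healthy on first success."""
    for _ in range(retries):
        if probe():  # stands in for healthcheck_curl / healthcheck_port
            return True
    return False

# A probe that fails twice then succeeds is still within budget.
attempts = iter([False, False, True])
print(is_healthy(lambda: next(attempts), retries=3))
```

The `start_period` seen in the log (5s here) is the grace window before failed probes begin counting against `retries`.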
2026-04-06 03:47:11.987762 | orchestrator | 2026-04-06 03:47:11.987911 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 03:47:11.987929 | orchestrator | 2026-04-06 03:47:11.987942 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 03:47:11.987953 | orchestrator | Monday 06 April 2026 03:46:51 +0000 (0:00:00.301) 0:00:00.301 ********** 2026-04-06 03:47:11.987965 | orchestrator | ok: [testbed-node-0] 2026-04-06 03:47:11.987977 | orchestrator | ok: [testbed-node-1] 2026-04-06 03:47:11.987989 | orchestrator | ok: [testbed-node-2] 2026-04-06 03:47:11.988000 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:47:11.988011 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:47:11.988022 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:47:11.988033 | orchestrator | 2026-04-06 03:47:11.988044 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 03:47:11.988055 | orchestrator | Monday 06 April 2026 03:46:52 +0000 (0:00:00.815) 0:00:01.116 ********** 2026-04-06 03:47:11.988068 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-04-06 03:47:11.988079 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-04-06 03:47:11.988090 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-04-06 03:47:11.988101 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-04-06 03:47:11.988112 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-04-06 03:47:11.988123 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-04-06 03:47:11.988134 | orchestrator | 2026-04-06 03:47:11.988145 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-04-06 03:47:11.988156 | orchestrator | 2026-04-06 03:47:11.988167 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-04-06 03:47:11.988178 | orchestrator | Monday 06 April 2026 03:46:53 +0000 (0:00:00.650) 0:00:01.767 ********** 2026-04-06 03:47:11.988191 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 03:47:11.988204 | orchestrator | 2026-04-06 03:47:11.988215 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-04-06 03:47:11.988226 | orchestrator | Monday 06 April 2026 03:46:54 +0000 (0:00:01.358) 0:00:03.126 ********** 2026-04-06 03:47:11.988237 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:47:11.988248 | orchestrator | 2026-04-06 03:47:11.988259 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-04-06 03:47:11.988270 | orchestrator | Monday 06 April 2026 03:46:54 +0000 (0:00:00.126) 0:00:03.252 ********** 2026-04-06 03:47:11.988283 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:47:11.988296 | orchestrator | 2026-04-06 03:47:11.988310 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-04-06 03:47:11.988323 | orchestrator | Monday 06 April 2026 03:46:55 +0000 (0:00:00.141) 0:00:03.394 ********** 2026-04-06 03:47:11.988335 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-06 03:47:11.988377 | orchestrator | 2026-04-06 03:47:11.988390 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-04-06 03:47:11.988403 | orchestrator | Monday 06 April 2026 03:46:59 +0000 (0:00:03.941) 0:00:07.336 ********** 2026-04-06 03:47:11.988416 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-06 03:47:11.988429 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-04-06 03:47:11.988440 | orchestrator | 
2026-04-06 03:47:11.988451 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-04-06 03:47:11.988462 | orchestrator | Monday 06 April 2026 03:47:02 +0000 (0:00:03.929) 0:00:11.265 ********** 2026-04-06 03:47:11.988473 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-06 03:47:11.988484 | orchestrator | 2026-04-06 03:47:11.988495 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-04-06 03:47:11.988506 | orchestrator | Monday 06 April 2026 03:47:06 +0000 (0:00:03.226) 0:00:14.492 ********** 2026-04-06 03:47:11.988534 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-04-06 03:47:11.988545 | orchestrator | 2026-04-06 03:47:11.988556 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-04-06 03:47:11.988567 | orchestrator | Monday 06 April 2026 03:47:10 +0000 (0:00:04.126) 0:00:18.618 ********** 2026-04-06 03:47:11.988578 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:47:11.988589 | orchestrator | 2026-04-06 03:47:11.988614 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-04-06 03:47:11.988626 | orchestrator | Monday 06 April 2026 03:47:10 +0000 (0:00:00.146) 0:00:18.765 ********** 2026-04-06 03:47:11.988652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-06 03:47:11.988688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-06 03:47:11.988702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-06 03:47:11.988715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 03:47:11.988737 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-06 03:47:11.988754 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-06 03:47:11.988765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 03:47:11.988784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 03:47:17.022580 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-06 03:47:17.022690 | orchestrator | 2026-04-06 03:47:17.022709 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-04-06 03:47:17.022723 | orchestrator | Monday 06 April 2026 03:47:11 +0000 (0:00:01.530) 0:00:20.295 ********** 2026-04-06 03:47:17.022759 | orchestrator | ok: [testbed-node-0 -> localhost] 
2026-04-06 03:47:17.022772 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-06 03:47:17.022782 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-06 03:47:17.022826 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-06 03:47:17.022834 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-06 03:47:17.022840 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-06 03:47:17.022847 | orchestrator | 2026-04-06 03:47:17.022854 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-04-06 03:47:17.022861 | orchestrator | Monday 06 April 2026 03:47:13 +0000 (0:00:01.671) 0:00:21.966 ********** 2026-04-06 03:47:17.022868 | orchestrator | ok: [testbed-node-0] 2026-04-06 03:47:17.022875 | orchestrator | ok: [testbed-node-1] 2026-04-06 03:47:17.022881 | orchestrator | ok: [testbed-node-2] 2026-04-06 03:47:17.022887 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:47:17.022893 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:47:17.022900 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:47:17.022906 | orchestrator | 2026-04-06 03:47:17.022913 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-04-06 03:47:17.022920 | orchestrator | Monday 06 April 2026 03:47:14 +0000 (0:00:00.651) 0:00:22.618 ********** 2026-04-06 03:47:17.022926 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:47:17.022933 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:47:17.022939 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:47:17.022945 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:47:17.022952 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:47:17.022959 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:47:17.022965 | orchestrator | 2026-04-06 03:47:17.022972 | orchestrator | TASK [ceilometer : Set the variable that control the copy of custom meter definitions] *** 
2026-04-06 03:47:17.022979 | orchestrator | Monday 06 April 2026 03:47:15 +0000 (0:00:00.858) 0:00:23.476 ********** 2026-04-06 03:47:17.022985 | orchestrator | ok: [testbed-node-0] 2026-04-06 03:47:17.022992 | orchestrator | ok: [testbed-node-1] 2026-04-06 03:47:17.022998 | orchestrator | ok: [testbed-node-2] 2026-04-06 03:47:17.023004 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:47:17.023045 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:47:17.023052 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:47:17.023058 | orchestrator | 2026-04-06 03:47:17.023065 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-04-06 03:47:17.023071 | orchestrator | Monday 06 April 2026 03:47:15 +0000 (0:00:00.699) 0:00:24.175 ********** 2026-04-06 03:47:17.023083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 03:47:17.023092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 03:47:17.023099 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:47:17.023123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 03:47:17.023142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 03:47:17.023150 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:47:17.023158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 03:47:17.023166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 03:47:17.023179 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-06 03:47:17.023186 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:47:17.023194 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:47:17.023202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 
'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-06 03:47:17.023214 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:47:17.023229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-06 03:47:22.389006 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:47:22.389166 | orchestrator | 2026-04-06 03:47:22.389838 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-04-06 03:47:22.389905 | orchestrator | Monday 06 April 2026 03:47:17 +0000 (0:00:01.163) 0:00:25.338 ********** 2026-04-06 03:47:22.389914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 03:47:22.389923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 03:47:22.389929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 03:47:22.389948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 03:47:22.389953 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:47:22.389959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 03:47:22.389981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 03:47:22.389985 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:47:22.389989 | orchestrator | skipping: [testbed-node-2] 2026-04-06 
03:47:22.390009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-06 03:47:22.390047 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:47:22.390052 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-06 03:47:22.390056 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:47:22.390063 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-06 03:47:22.390068 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:47:22.390072 | orchestrator | 2026-04-06 03:47:22.390079 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-04-06 03:47:22.390091 | orchestrator | Monday 06 April 2026 03:47:18 +0000 (0:00:01.044) 0:00:26.383 ********** 2026-04-06 03:47:22.390096 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-06 03:47:22.390100 | orchestrator | 2026-04-06 03:47:22.390105 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-04-06 03:47:22.390111 | orchestrator | Monday 06 April 2026 03:47:18 +0000 (0:00:00.743) 0:00:27.126 ********** 2026-04-06 03:47:22.390115 | orchestrator | ok: [testbed-node-0] 2026-04-06 03:47:22.390120 | orchestrator | ok: [testbed-node-1] 2026-04-06 03:47:22.390124 | orchestrator | ok: [testbed-node-2] 2026-04-06 03:47:22.390128 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:47:22.390132 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:47:22.390136 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:47:22.390140 | orchestrator | 2026-04-06 03:47:22.390145 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-04-06 03:47:22.390149 | orchestrator | Monday 06 April 2026 03:47:19 +0000 (0:00:00.921) 0:00:28.047 ********** 2026-04-06 03:47:22.390153 | orchestrator | ok: 
[testbed-node-0] 2026-04-06 03:47:22.390157 | orchestrator | ok: [testbed-node-1] 2026-04-06 03:47:22.390161 | orchestrator | ok: [testbed-node-2] 2026-04-06 03:47:22.390165 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:47:22.390169 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:47:22.390173 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:47:22.390177 | orchestrator | 2026-04-06 03:47:22.390181 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-04-06 03:47:22.390185 | orchestrator | Monday 06 April 2026 03:47:20 +0000 (0:00:01.033) 0:00:29.081 ********** 2026-04-06 03:47:22.390190 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:47:22.390194 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:47:22.390198 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:47:22.390202 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:47:22.390206 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:47:22.390210 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:47:22.390214 | orchestrator | 2026-04-06 03:47:22.390218 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-04-06 03:47:22.390222 | orchestrator | Monday 06 April 2026 03:47:21 +0000 (0:00:00.928) 0:00:30.010 ********** 2026-04-06 03:47:22.390226 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:47:22.390230 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:47:22.390234 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:47:22.390238 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:47:22.390243 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:47:22.390247 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:47:22.390250 | orchestrator | 2026-04-06 03:47:28.041353 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] ************************ 2026-04-06 03:47:28.041451 | orchestrator | Monday 06 April 2026 
03:47:22 +0000 (0:00:00.699) 0:00:30.709 ********** 2026-04-06 03:47:28.041462 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-06 03:47:28.041470 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-06 03:47:28.041477 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-06 03:47:28.041484 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-06 03:47:28.041491 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-06 03:47:28.041498 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-06 03:47:28.041505 | orchestrator | 2026-04-06 03:47:28.041512 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-04-06 03:47:28.041518 | orchestrator | Monday 06 April 2026 03:47:24 +0000 (0:00:01.719) 0:00:32.429 ********** 2026-04-06 03:47:28.041528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 03:47:28.041562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 03:47:28.041584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 03:47:28.041591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 03:47:28.041598 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:47:28.041605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 03:47:28.041611 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:47:28.041635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 03:47:28.041643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-06 03:47:28.041673 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:47:28.041687 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:47:28.041691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 
'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-06 03:47:28.041695 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:47:28.041704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-06 03:47:28.041708 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:47:28.041712 | orchestrator | 2026-04-06 03:47:28.041716 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] ************************* 2026-04-06 03:47:28.041720 | orchestrator | Monday 06 April 2026 03:47:25 +0000 (0:00:00.952) 0:00:33.382 ********** 2026-04-06 03:47:28.041723 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:47:28.041727 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:47:28.041731 | orchestrator | skipping: 
[testbed-node-2] 2026-04-06 03:47:28.041735 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:47:28.041739 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:47:28.041742 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:47:28.041746 | orchestrator | 2026-04-06 03:47:28.041750 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] ***************** 2026-04-06 03:47:28.041754 | orchestrator | Monday 06 April 2026 03:47:25 +0000 (0:00:00.907) 0:00:34.290 ********** 2026-04-06 03:47:28.041758 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-06 03:47:28.041762 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-06 03:47:28.041766 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-06 03:47:28.041770 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-06 03:47:28.041773 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-06 03:47:28.041777 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-06 03:47:28.041781 | orchestrator | 2026-04-06 03:47:28.041785 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************ 2026-04-06 03:47:28.041789 | orchestrator | Monday 06 April 2026 03:47:27 +0000 (0:00:01.506) 0:00:35.797 ********** 2026-04-06 03:47:28.041817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 03:47:34.298415 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 03:47:34.298535 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:47:34.298555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 03:47:34.298569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': 
'30'}}})  2026-04-06 03:47:34.298599 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:47:34.298612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 03:47:34.298623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 03:47:34.298635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-06 03:47:34.298666 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:47:34.298677 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:47:34.298708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-06 03:47:34.298719 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:47:34.298731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-06 03:47:34.298742 | orchestrator | skipping: [testbed-node-5] 2026-04-06 
03:47:34.298753 | orchestrator | 2026-04-06 03:47:34.298764 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] *************** 2026-04-06 03:47:34.298777 | orchestrator | Monday 06 April 2026 03:47:28 +0000 (0:00:01.181) 0:00:36.978 ********** 2026-04-06 03:47:34.298789 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:47:34.298913 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:47:34.298926 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:47:34.298937 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:47:34.298948 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:47:34.298959 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:47:34.298971 | orchestrator | 2026-04-06 03:47:34.298985 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] ********************* 2026-04-06 03:47:34.299005 | orchestrator | Monday 06 April 2026 03:47:29 +0000 (0:00:00.837) 0:00:37.816 ********** 2026-04-06 03:47:34.299017 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:47:34.299028 | orchestrator | 2026-04-06 03:47:34.299040 | orchestrator | TASK [ceilometer : Set ceilometer policy file] ********************************* 2026-04-06 03:47:34.299051 | orchestrator | Monday 06 April 2026 03:47:29 +0000 (0:00:00.148) 0:00:37.964 ********** 2026-04-06 03:47:34.299064 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:47:34.299075 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:47:34.299087 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:47:34.299098 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:47:34.299108 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:47:34.299119 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:47:34.299129 | orchestrator | 2026-04-06 03:47:34.299140 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-04-06 03:47:34.299151 | orchestrator | Monday 06 April 
2026 03:47:30 +0000 (0:00:00.662) 0:00:38.627 ********** 2026-04-06 03:47:34.299163 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 03:47:34.299186 | orchestrator | 2026-04-06 03:47:34.299198 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] ***** 2026-04-06 03:47:34.299209 | orchestrator | Monday 06 April 2026 03:47:31 +0000 (0:00:01.483) 0:00:40.111 ********** 2026-04-06 03:47:34.299222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-06 03:47:34.299249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-06 03:47:34.923110 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-06 03:47:34.923210 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-06 03:47:34.923238 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-06 03:47:34.923247 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-06 03:47:34.923273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 03:47:34.923283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 03:47:34.923307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 03:47:34.923315 | orchestrator | 2026-04-06 03:47:34.923324 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-04-06 03:47:34.923333 | orchestrator | Monday 06 April 2026 03:47:34 +0000 (0:00:02.502) 0:00:42.614 ********** 2026-04-06 03:47:34.923342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 03:47:34.923355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 
'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 03:47:34.923364 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:47:34.923379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 03:47:34.923387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 03:47:34.923395 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:47:34.923402 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 03:47:34.923416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 03:47:37.044328 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:47:37.044410 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-06 03:47:37.044419 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:47:37.044438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-06 03:47:37.044460 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:47:37.044465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-06 03:47:37.044470 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:47:37.044475 | orchestrator | 2026-04-06 03:47:37.044483 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend 
internal TLS key] *** 2026-04-06 03:47:37.044493 | orchestrator | Monday 06 April 2026 03:47:35 +0000 (0:00:00.992) 0:00:43.606 ********** 2026-04-06 03:47:37.044502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 03:47:37.044511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 03:47:37.044537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 03:47:37.044546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 03:47:37.044554 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:47:37.044582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 03:47:37.044588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 03:47:37.044594 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:47:37.044602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-06 03:47:37.044614 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:47:37.044623 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:47:37.044630 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-06 03:47:37.044637 | orchestrator | skipping: 
[testbed-node-4]
2026-04-06 03:47:37.044652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-06 03:47:44.761015 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:47:44.761116 | orchestrator |
2026-04-06 03:47:44.761127 | orchestrator | TASK [ceilometer : Copying over config.json files for services] ****************
2026-04-06 03:47:44.761137 | orchestrator | Monday 06 April 2026 03:47:37 +0000 (0:00:01.750) 0:00:45.357 **********
2026-04-06 03:47:44.761176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 03:47:44.761188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 03:47:44.761196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 03:47:44.761205 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-06 03:47:44.761214 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-06 03:47:44.761239 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-06 03:47:44.761255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 03:47:44.761268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 03:47:44.761276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 03:47:44.761283 | orchestrator |
2026-04-06 03:47:44.761291 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] *******************************
2026-04-06 03:47:44.761299 | orchestrator | Monday 06 April 2026 03:47:39 +0000 (0:00:02.672) 0:00:48.030 **********
2026-04-06 03:47:44.761306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 03:47:44.761314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 03:47:44.761326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 03:47:54.742243 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-06 03:47:54.742366 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-06 03:47:54.742382 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-06 03:47:54.742394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 03:47:54.742405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 03:47:54.742414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 03:47:54.742456 | orchestrator |
2026-04-06 03:47:54.742474 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] *****************
2026-04-06 03:47:54.742513 | orchestrator | Monday 06 April 2026 03:47:44 +0000 (0:00:05.048) 0:00:53.078 **********
2026-04-06 03:47:54.742530 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 03:47:54.742547 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-06 03:47:54.742562 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-06 03:47:54.742577 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-06 03:47:54.742592 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-06 03:47:54.742607 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-06 03:47:54.742617 | orchestrator |
2026-04-06 03:47:54.742626 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************
2026-04-06 03:47:54.742635 | orchestrator | Monday 06 April 2026 03:47:46 +0000 (0:00:01.639) 0:00:54.717 **********
2026-04-06 03:47:54.742644 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:47:54.742653 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:47:54.742661 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:47:54.742674 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:47:54.742688 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:47:54.742703 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:47:54.742730 | orchestrator |
2026-04-06 03:47:54.742755 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] ***
2026-04-06 03:47:54.742771 | orchestrator | Monday 06 April 2026 03:47:47 +0000 (0:00:00.613) 0:00:55.331 **********
2026-04-06 03:47:54.742787 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:47:54.742867 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:47:54.742886 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:47:54.742901 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:47:54.742915 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:47:54.742929 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:47:54.742944 | orchestrator |
2026-04-06 03:47:54.742958 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] ***************************
2026-04-06 03:47:54.742973 | orchestrator | Monday 06 April 2026 03:47:48 +0000 (0:00:01.715) 0:00:57.046 **********
2026-04-06 03:47:54.743004 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:47:54.743032 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:47:54.743047 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:47:54.743062 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:47:54.743077 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:47:54.743092 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:47:54.743107 | orchestrator |
2026-04-06 03:47:54.743122 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] **************************
2026-04-06 03:47:54.743138 | orchestrator | Monday 06 April 2026 03:47:50 +0000 (0:00:01.480) 0:00:58.526 **********
2026-04-06 03:47:54.743152 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 03:47:54.743165 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-06 03:47:54.743180 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-06 03:47:54.743194 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-06 03:47:54.743207 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-06 03:47:54.743220 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-06 03:47:54.743234 | orchestrator |
2026-04-06 03:47:54.743249 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] *********************
2026-04-06 03:47:54.743262 | orchestrator | Monday 06 April 2026 03:47:52 +0000 (0:00:01.917) 0:01:00.444 **********
2026-04-06 03:47:54.743281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 03:47:54.743335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 03:47:54.743351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 03:47:54.743384 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-06 03:47:55.846503 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-06 03:47:55.846584 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-06 03:47:55.846608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 03:47:55.846615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 03:47:55.846621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 03:47:55.846626 | orchestrator |
2026-04-06 03:47:55.846633 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] ****************************
2026-04-06 03:47:55.846639 | orchestrator | Monday 06 April 2026 03:47:54 +0000 (0:00:02.612) 0:01:03.057 **********
2026-04-06 03:47:55.846644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 03:47:55.846667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 03:47:55.846674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 03:47:55.846684 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:47:55.846690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 03:47:55.846695 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:47:55.846700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 03:47:55.846706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 03:47:55.846711 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:47:55.846717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-06 03:47:55.846722 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:47:55.846733 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-06 03:47:59.794465 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:47:59.794579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-06 03:47:59.794621 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:47:59.794633 | orchestrator |
2026-04-06 03:47:59.794644 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] *****************************
2026-04-06 03:47:59.794655 | orchestrator | Monday 06 April 2026 03:47:55 +0000 (0:00:01.108) 0:01:04.165 **********
2026-04-06 03:47:59.794665 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:47:59.794675 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:47:59.794684 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:47:59.794694 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:47:59.794704 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:47:59.794713 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:47:59.794723 | orchestrator |
2026-04-06 03:47:59.794737 | orchestrator | TASK [ceilometer : Copying over existing policy file] **************************
2026-04-06 03:47:59.794753 | orchestrator | Monday 06 April 2026 03:47:56 +0000 (0:00:00.923) 0:01:05.089 **********
2026-04-06 03:47:59.794771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 03:47:59.794792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 03:47:59.794841 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:47:59.794858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 03:47:59.794893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 03:47:59.794928 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:47:59.794969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 03:47:59.794990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 03:47:59.795007 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:47:59.795026 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-06 03:47:59.795043 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:47:59.795062 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-06 03:47:59.795079 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:47:59.795096 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-06 03:47:59.795120 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:47:59.795132 | orchestrator |
2026-04-06 03:47:59.795143 | orchestrator | TASK [ceilometer : Check ceilometer containers] ********************************
2026-04-06 03:47:59.795160 | orchestrator | Monday 06 April 2026 03:47:57 +0000 (0:00:01.127) 0:01:06.216 **********
2026-04-06 03:47:59.795183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 03:48:32.909206 |
orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-06 03:48:32.909343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-06 03:48:32.909367 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-06 03:48:32.909393 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-06 03:48:32.909424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 03:48:32.909496 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-06 03:48:32.909546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 03:48:32.909565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 03:48:32.909582 | orchestrator | 2026-04-06 03:48:32.909602 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-04-06 03:48:32.909621 | orchestrator | Monday 06 April 2026 03:47:59 +0000 (0:00:01.893) 0:01:08.110 ********** 2026-04-06 03:48:32.909638 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:48:32.909656 | orchestrator | skipping: [testbed-node-1] 2026-04-06 
03:48:32.909675 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:48:32.909694 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:48:32.909710 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:48:32.909727 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:48:32.909745 | orchestrator |
2026-04-06 03:48:32.909763 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] *********************
2026-04-06 03:48:32.909815 | orchestrator | Monday 06 April 2026 03:48:00 +0000 (0:00:00.657) 0:01:08.768 **********
2026-04-06 03:48:32.909835 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:48:32.909853 | orchestrator |
2026-04-06 03:48:32.909871 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-04-06 03:48:32.909890 | orchestrator | Monday 06 April 2026 03:48:05 +0000 (0:00:04.929) 0:01:13.698 **********
2026-04-06 03:48:32.909908 | orchestrator |
2026-04-06 03:48:32.909928 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-04-06 03:48:32.909947 | orchestrator | Monday 06 April 2026 03:48:05 +0000 (0:00:00.081) 0:01:13.780 **********
2026-04-06 03:48:32.909966 | orchestrator |
2026-04-06 03:48:32.909979 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-04-06 03:48:32.909990 | orchestrator | Monday 06 April 2026 03:48:05 +0000 (0:00:00.074) 0:01:13.854 **********
2026-04-06 03:48:32.910077 | orchestrator |
2026-04-06 03:48:32.910091 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-04-06 03:48:32.910102 | orchestrator | Monday 06 April 2026 03:48:05 +0000 (0:00:00.262) 0:01:14.116 **********
2026-04-06 03:48:32.910113 | orchestrator |
2026-04-06 03:48:32.910124 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-04-06 03:48:32.910136 | orchestrator | Monday 06 April 2026 03:48:05 +0000 (0:00:00.083) 0:01:14.200 **********
2026-04-06 03:48:32.910147 | orchestrator |
2026-04-06 03:48:32.910158 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-04-06 03:48:32.910169 | orchestrator | Monday 06 April 2026 03:48:05 +0000 (0:00:00.098) 0:01:14.299 **********
2026-04-06 03:48:32.910179 | orchestrator |
2026-04-06 03:48:32.910203 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] *******
2026-04-06 03:48:32.910213 | orchestrator | Monday 06 April 2026 03:48:06 +0000 (0:00:00.085) 0:01:14.384 **********
2026-04-06 03:48:32.910225 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:48:32.910236 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:48:32.910245 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:48:32.910255 | orchestrator |
2026-04-06 03:48:32.910265 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************
2026-04-06 03:48:32.910275 | orchestrator | Monday 06 April 2026 03:48:11 +0000 (0:00:05.613) 0:01:19.998 **********
2026-04-06 03:48:32.910285 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:48:32.910294 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:48:32.910304 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:48:32.910313 | orchestrator |
2026-04-06 03:48:32.910323 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************
2026-04-06 03:48:32.910341 | orchestrator | Monday 06 April 2026 03:48:21 +0000 (0:00:09.976) 0:01:29.975 **********
2026-04-06 03:48:32.910351 | orchestrator | changed: [testbed-node-3]
2026-04-06 03:48:32.910361 | orchestrator | changed: [testbed-node-5]
2026-04-06 03:48:32.910371 | orchestrator | changed: [testbed-node-4]
2026-04-06 03:48:32.910383 | orchestrator |
2026-04-06 03:48:32.910399 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 03:48:32.910414 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-06 03:48:32.910429 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-06 03:48:32.910467 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-06 03:48:33.463403 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-04-06 03:48:33.463529 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-04-06 03:48:33.463559 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-04-06 03:48:33.463573 | orchestrator |
2026-04-06 03:48:33.463588 | orchestrator |
2026-04-06 03:48:33.463603 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 03:48:33.463617 | orchestrator | Monday 06 April 2026 03:48:32 +0000 (0:00:11.244) 0:01:41.219 **********
2026-04-06 03:48:33.463631 | orchestrator | ===============================================================================
2026-04-06 03:48:33.463645 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 11.24s
2026-04-06 03:48:33.463659 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 9.98s
2026-04-06 03:48:33.463673 | orchestrator | ceilometer : Restart ceilometer-notification container ------------------ 5.61s
2026-04-06 03:48:33.463715 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 5.05s
2026-04-06 03:48:33.463724 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 4.93s
2026-04-06 03:48:33.463731 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 4.13s
2026-04-06 03:48:33.463739 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.94s
2026-04-06 03:48:33.463746 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 3.93s
2026-04-06 03:48:33.463754 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.23s
2026-04-06 03:48:33.463761 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.67s
2026-04-06 03:48:33.463768 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.61s
2026-04-06 03:48:33.463775 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.50s
2026-04-06 03:48:33.463828 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.92s
2026-04-06 03:48:33.463835 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.89s
2026-04-06 03:48:33.463843 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.75s
2026-04-06 03:48:33.463850 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.72s
2026-04-06 03:48:33.463858 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.72s
2026-04-06 03:48:33.463866 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.67s
2026-04-06 03:48:33.463873 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.64s
2026-04-06 03:48:33.463881 | orchestrator | ceilometer : Ensuring config directories exist -------------------------- 1.53s
2026-04-06 03:48:36.172506 | orchestrator | 2026-04-06 03:48:36 | INFO  | Task c6f8c143-3e87-4668-bc79-ad326db700c4 (aodh) was prepared for execution.
2026-04-06 03:48:36.172592 | orchestrator | 2026-04-06 03:48:36 | INFO  | It takes a moment until task c6f8c143-3e87-4668-bc79-ad326db700c4 (aodh) has been started and output is visible here.
2026-04-06 03:49:09.242319 | orchestrator |
2026-04-06 03:49:09.242426 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 03:49:09.242438 | orchestrator |
2026-04-06 03:49:09.242445 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 03:49:09.242453 | orchestrator | Monday 06 April 2026 03:48:40 +0000 (0:00:00.295) 0:00:00.295 **********
2026-04-06 03:49:09.242460 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:49:09.242468 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:49:09.242475 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:49:09.242482 | orchestrator |
2026-04-06 03:49:09.242488 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 03:49:09.242496 | orchestrator | Monday 06 April 2026 03:48:41 +0000 (0:00:00.407) 0:00:00.702 **********
2026-04-06 03:49:09.242503 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True)
2026-04-06 03:49:09.242510 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True)
2026-04-06 03:49:09.242518 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True)
2026-04-06 03:49:09.242524 | orchestrator |
2026-04-06 03:49:09.242545 | orchestrator | PLAY [Apply role aodh] *********************************************************
2026-04-06 03:49:09.242552 | orchestrator |
2026-04-06 03:49:09.242559 | orchestrator | TASK [aodh : include_tasks] ****************************************************
2026-04-06 03:49:09.242565 | orchestrator | Monday 06 April 2026 03:48:41 +0000 (0:00:00.487) 0:00:01.190 **********
2026-04-06 03:49:09.242572 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 03:49:09.242580 | orchestrator |
2026-04-06 03:49:09.242587 | orchestrator | TASK [service-ks-register : aodh | Creating services] **************************
2026-04-06 03:49:09.242594 | orchestrator | Monday 06 April 2026 03:48:42 +0000 (0:00:00.694) 0:00:01.884 **********
2026-04-06 03:49:09.242622 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming))
2026-04-06 03:49:09.242630 | orchestrator |
2026-04-06 03:49:09.242636 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] *************************
2026-04-06 03:49:09.242644 | orchestrator | Monday 06 April 2026 03:48:45 +0000 (0:00:03.493) 0:00:05.378 **********
2026-04-06 03:49:09.242650 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal)
2026-04-06 03:49:09.242657 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public)
2026-04-06 03:49:09.242664 | orchestrator |
2026-04-06 03:49:09.242670 | orchestrator | TASK [service-ks-register : aodh | Creating projects] **************************
2026-04-06 03:49:09.242677 | orchestrator | Monday 06 April 2026 03:48:52 +0000 (0:00:06.599) 0:00:11.977 **********
2026-04-06 03:49:09.242684 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-06 03:49:09.242692 | orchestrator |
2026-04-06 03:49:09.242698 | orchestrator | TASK [service-ks-register : aodh | Creating users] *****************************
2026-04-06 03:49:09.242705 | orchestrator | Monday 06 April 2026 03:48:56 +0000 (0:00:03.483) 0:00:15.461 **********
2026-04-06 03:49:09.242712 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-06 03:49:09.242718 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service)
2026-04-06 03:49:09.242725 | orchestrator |
2026-04-06 03:49:09.242732 | orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 2026-04-06
03:49:09.242739 | orchestrator | Monday 06 April 2026 03:48:59 +0000 (0:00:03.949) 0:00:19.411 ********** 2026-04-06 03:49:09.242746 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-06 03:49:09.242752 | orchestrator | 2026-04-06 03:49:09.242800 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-04-06 03:49:09.242807 | orchestrator | Monday 06 April 2026 03:49:03 +0000 (0:00:03.263) 0:00:22.674 ********** 2026-04-06 03:49:09.242813 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-04-06 03:49:09.242820 | orchestrator | 2026-04-06 03:49:09.242827 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-04-06 03:49:09.242833 | orchestrator | Monday 06 April 2026 03:49:07 +0000 (0:00:03.935) 0:00:26.609 ********** 2026-04-06 03:49:09.242843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-06 03:49:09.242868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-06 03:49:09.242888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-06 03:49:09.242895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 03:49:09.242904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 03:49:09.242912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 03:49:09.242919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:09.242932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:10.697203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:10.697316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 
03:49:10.697330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:10.697339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:10.697348 | orchestrator | 2026-04-06 03:49:10.697358 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-04-06 03:49:10.697368 | orchestrator | Monday 06 April 2026 03:49:09 +0000 (0:00:02.080) 0:00:28.690 ********** 2026-04-06 03:49:10.697379 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:49:10.697393 | orchestrator | 2026-04-06 03:49:10.697414 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-04-06 03:49:10.697429 | orchestrator | Monday 06 April 2026 03:49:09 +0000 (0:00:00.139) 0:00:28.830 ********** 2026-04-06 03:49:10.697441 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:49:10.697456 | orchestrator | skipping: 
[testbed-node-1] 2026-04-06 03:49:10.697470 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:49:10.697485 | orchestrator | 2026-04-06 03:49:10.697499 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-04-06 03:49:10.697526 | orchestrator | Monday 06 April 2026 03:49:09 +0000 (0:00:00.588) 0:00:29.418 ********** 2026-04-06 03:49:10.697544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-06 03:49:10.697590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 03:49:10.697615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 03:49:10.697631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 03:49:10.697646 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:49:10.697661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-06 03:49:10.697676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 03:49:10.697689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 03:49:10.697718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}})  2026-04-06 03:49:15.821670 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:49:15.821843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-06 03:49:15.821864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 03:49:15.821877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 03:49:15.821888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 03:49:15.821898 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:49:15.821908 | orchestrator | 2026-04-06 03:49:15.821919 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-06 03:49:15.821930 | orchestrator | Monday 06 April 2026 03:49:10 +0000 (0:00:00.722) 0:00:30.140 ********** 2026-04-06 03:49:15.821940 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 03:49:15.821971 | orchestrator | 2026-04-06 03:49:15.821981 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-04-06 03:49:15.821991 | orchestrator | Monday 06 April 2026 03:49:11 +0000 (0:00:00.820) 0:00:30.961 ********** 2026-04-06 03:49:15.822001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-06 03:49:15.822085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-06 03:49:15.822099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-06 03:49:15.822110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 03:49:15.822120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 03:49:15.822138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 03:49:15.822149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:15.822171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:16.529902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:16.530162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:16.530184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:16.530198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:16.530236 | orchestrator | 2026-04-06 03:49:16.530250 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-04-06 03:49:16.530263 | orchestrator | Monday 06 April 2026 03:49:15 +0000 (0:00:04.305) 0:00:35.267 ********** 2026-04-06 03:49:16.530277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-06 03:49:16.530289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 03:49:16.530337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 
'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 03:49:16.530353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 03:49:16.530366 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:49:16.530381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-06 03:49:16.530402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 03:49:16.530415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 03:49:16.530428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  
2026-04-06 03:49:16.530442 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:49:16.530480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-06 03:49:17.696738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 03:49:17.696878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 03:49:17.696918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 03:49:17.696930 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:49:17.696941 | orchestrator | 2026-04-06 03:49:17.696951 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-04-06 03:49:17.696961 | orchestrator | Monday 06 April 2026 03:49:16 +0000 (0:00:00.708) 0:00:35.976 ********** 2026-04-06 03:49:17.696971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-06 03:49:17.696981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 03:49:17.697008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 03:49:17.697045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}})  2026-04-06 03:49:17.697061 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:49:17.697077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-06 03:49:17.697105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 03:49:17.697122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 03:49:17.697138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 03:49:17.697171 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:49:17.697211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-06 03:49:21.937682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 03:49:21.937909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 03:49:21.937937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 03:49:21.937955 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:49:21.937975 | orchestrator | 2026-04-06 03:49:21.937993 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 
2026-04-06 03:49:21.938011 | orchestrator | Monday 06 April 2026 03:49:17 +0000 (0:00:01.164) 0:00:37.140 ********** 2026-04-06 03:49:21.938095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-06 03:49:21.938135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-06 03:49:21.938182 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-06 03:49:21.938212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 03:49:21.938225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 03:49:21.938237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 03:49:21.938248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:21.938265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:21.938277 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:21.938297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:31.494622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:31.494736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:31.494791 | orchestrator | 2026-04-06 03:49:31.494813 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-04-06 03:49:31.494833 | orchestrator | Monday 06 April 2026 03:49:21 +0000 (0:00:04.243) 0:00:41.383 ********** 2026-04-06 03:49:31.494852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-06 03:49:31.494881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-06 03:49:31.494893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-06 03:49:31.494941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 03:49:31.494953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 03:49:31.494963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 03:49:31.494974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:31.494984 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:31.495001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:31.495019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:31.495037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:36.659023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:36.659137 | orchestrator | 2026-04-06 03:49:36.659152 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-04-06 03:49:36.659164 | orchestrator | Monday 06 April 2026 03:49:31 +0000 (0:00:09.555) 0:00:50.939 ********** 2026-04-06 03:49:36.659174 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:49:36.659185 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:49:36.659194 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:49:36.659204 | orchestrator | 2026-04-06 03:49:36.659214 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-04-06 03:49:36.659223 | orchestrator | Monday 06 April 2026 03:49:33 +0000 (0:00:01.809) 0:00:52.748 ********** 2026-04-06 03:49:36.659234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 
'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-06 03:49:36.659261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-06 03:49:36.659292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-06 03:49:36.659320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 03:49:36.659331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 03:49:36.659340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 03:49:36.659351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:36.659366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:36.659383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:36.659395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 03:49:36.659412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 03:50:34.406597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 03:50:34.406702 | orchestrator | 2026-04-06 03:50:34.406812 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-06 03:50:34.406823 | orchestrator | Monday 06 April 2026 03:49:36 +0000 (0:00:03.355) 0:00:56.103 ********** 2026-04-06 03:50:34.406828 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:50:34.406834 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:50:34.406838 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:50:34.406841 | orchestrator | 2026-04-06 03:50:34.406846 | orchestrator | TASK [aodh : Creating aodh database] ******************************************* 2026-04-06 03:50:34.406850 | orchestrator | Monday 06 April 2026 03:49:37 +0000 (0:00:00.357) 0:00:56.461 ********** 2026-04-06 03:50:34.406854 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:50:34.406858 | orchestrator | 2026-04-06 03:50:34.406862 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] ************** 2026-04-06 03:50:34.406866 | orchestrator | Monday 06 April 2026 03:49:39 +0000 (0:00:02.210) 0:00:58.671 ********** 2026-04-06 03:50:34.406870 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:50:34.406874 | orchestrator | 2026-04-06 03:50:34.406878 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-04-06 03:50:34.406902 | orchestrator | Monday 06 April 2026 03:49:41 +0000 (0:00:02.344) 0:01:01.016 ********** 2026-04-06 03:50:34.406907 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:50:34.406910 | orchestrator | 2026-04-06 03:50:34.406914 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-06 03:50:34.406918 | orchestrator | Monday 06 
April 2026 03:49:55 +0000 (0:00:13.514) 0:01:14.531 ********** 2026-04-06 03:50:34.406922 | orchestrator | 2026-04-06 03:50:34.406927 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-06 03:50:34.406930 | orchestrator | Monday 06 April 2026 03:49:55 +0000 (0:00:00.082) 0:01:14.614 ********** 2026-04-06 03:50:34.406934 | orchestrator | 2026-04-06 03:50:34.406938 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-06 03:50:34.406942 | orchestrator | Monday 06 April 2026 03:49:55 +0000 (0:00:00.088) 0:01:14.702 ********** 2026-04-06 03:50:34.406946 | orchestrator | 2026-04-06 03:50:34.406950 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-04-06 03:50:34.406953 | orchestrator | Monday 06 April 2026 03:49:55 +0000 (0:00:00.308) 0:01:15.011 ********** 2026-04-06 03:50:34.406957 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:50:34.406961 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:50:34.406965 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:50:34.406969 | orchestrator | 2026-04-06 03:50:34.406984 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-04-06 03:50:34.406988 | orchestrator | Monday 06 April 2026 03:50:02 +0000 (0:00:06.558) 0:01:21.569 ********** 2026-04-06 03:50:34.406992 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:50:34.406996 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:50:34.407000 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:50:34.407003 | orchestrator | 2026-04-06 03:50:34.407007 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] *********************** 2026-04-06 03:50:34.407011 | orchestrator | Monday 06 April 2026 03:50:12 +0000 (0:00:10.522) 0:01:32.092 ********** 2026-04-06 03:50:34.407015 | orchestrator | changed: [testbed-node-0] 2026-04-06 
03:50:34.407019 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:50:34.407022 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:50:34.407026 | orchestrator | 2026-04-06 03:50:34.407030 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] *********************** 2026-04-06 03:50:34.407034 | orchestrator | Monday 06 April 2026 03:50:23 +0000 (0:00:10.453) 0:01:42.546 ********** 2026-04-06 03:50:34.407038 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:50:34.407041 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:50:34.407045 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:50:34.407049 | orchestrator | 2026-04-06 03:50:34.407053 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 03:50:34.407058 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-06 03:50:34.407063 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-06 03:50:34.407067 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-06 03:50:34.407071 | orchestrator | 2026-04-06 03:50:34.407075 | orchestrator | 2026-04-06 03:50:34.407079 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 03:50:34.407083 | orchestrator | Monday 06 April 2026 03:50:33 +0000 (0:00:10.890) 0:01:53.436 ********** 2026-04-06 03:50:34.407086 | orchestrator | =============================================================================== 2026-04-06 03:50:34.407090 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 13.51s 2026-04-06 03:50:34.407094 | orchestrator | aodh : Restart aodh-notifier container --------------------------------- 10.89s 2026-04-06 03:50:34.407111 | orchestrator | aodh : Restart aodh-evaluator container 
-------------------------------- 10.52s 2026-04-06 03:50:34.407120 | orchestrator | aodh : Restart aodh-listener container --------------------------------- 10.45s 2026-04-06 03:50:34.407124 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 9.56s 2026-04-06 03:50:34.407127 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.60s 2026-04-06 03:50:34.407131 | orchestrator | aodh : Restart aodh-api container --------------------------------------- 6.56s 2026-04-06 03:50:34.407135 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.31s 2026-04-06 03:50:34.407139 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.24s 2026-04-06 03:50:34.407143 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 3.95s 2026-04-06 03:50:34.407149 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.94s 2026-04-06 03:50:34.407155 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.49s 2026-04-06 03:50:34.407161 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.48s 2026-04-06 03:50:34.407171 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.36s 2026-04-06 03:50:34.407177 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.26s 2026-04-06 03:50:34.407186 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.34s 2026-04-06 03:50:34.407192 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.21s 2026-04-06 03:50:34.407198 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.08s 2026-04-06 03:50:34.407204 | orchestrator | aodh : Copying over wsgi-aodh files for services 
------------------------ 1.81s 2026-04-06 03:50:34.407210 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.16s 2026-04-06 03:50:37.088064 | orchestrator | 2026-04-06 03:50:37 | INFO  | Task 97878362-72fe-4cde-b4e6-64bd7ddacd11 (kolla-ceph-rgw) was prepared for execution. 2026-04-06 03:50:37.088145 | orchestrator | 2026-04-06 03:50:37 | INFO  | It takes a moment until task 97878362-72fe-4cde-b4e6-64bd7ddacd11 (kolla-ceph-rgw) has been started and output is visible here. 2026-04-06 03:51:15.544085 | orchestrator | 2026-04-06 03:51:15.545039 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 03:51:15.545084 | orchestrator | 2026-04-06 03:51:15.545099 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 03:51:15.545111 | orchestrator | Monday 06 April 2026 03:50:41 +0000 (0:00:00.322) 0:00:00.322 ********** 2026-04-06 03:51:15.545123 | orchestrator | ok: [testbed-manager] 2026-04-06 03:51:15.545135 | orchestrator | ok: [testbed-node-0] 2026-04-06 03:51:15.545147 | orchestrator | ok: [testbed-node-1] 2026-04-06 03:51:15.545158 | orchestrator | ok: [testbed-node-2] 2026-04-06 03:51:15.545169 | orchestrator | ok: [testbed-node-3] 2026-04-06 03:51:15.545180 | orchestrator | ok: [testbed-node-4] 2026-04-06 03:51:15.545191 | orchestrator | ok: [testbed-node-5] 2026-04-06 03:51:15.545202 | orchestrator | 2026-04-06 03:51:15.545213 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 03:51:15.545243 | orchestrator | Monday 06 April 2026 03:50:42 +0000 (0:00:00.913) 0:00:01.235 ********** 2026-04-06 03:51:15.545256 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-04-06 03:51:15.545267 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-04-06 03:51:15.545279 | orchestrator | ok: [testbed-node-1] => 
(item=enable_ceph_rgw_True) 2026-04-06 03:51:15.545289 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-04-06 03:51:15.545300 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-04-06 03:51:15.545311 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-04-06 03:51:15.545322 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-04-06 03:51:15.545333 | orchestrator | 2026-04-06 03:51:15.545345 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-06 03:51:15.545383 | orchestrator | 2026-04-06 03:51:15.545395 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-04-06 03:51:15.545406 | orchestrator | Monday 06 April 2026 03:50:43 +0000 (0:00:00.787) 0:00:02.022 ********** 2026-04-06 03:51:15.545417 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 03:51:15.545430 | orchestrator | 2026-04-06 03:51:15.545441 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-04-06 03:51:15.545452 | orchestrator | Monday 06 April 2026 03:50:45 +0000 (0:00:01.686) 0:00:03.709 ********** 2026-04-06 03:51:15.545463 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-04-06 03:51:15.545475 | orchestrator | 2026-04-06 03:51:15.545485 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-04-06 03:51:15.545496 | orchestrator | Monday 06 April 2026 03:50:49 +0000 (0:00:04.026) 0:00:07.736 ********** 2026-04-06 03:51:15.545508 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-04-06 03:51:15.545521 | orchestrator | changed: [testbed-manager] => 
(item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-04-06 03:51:15.545532 | orchestrator | 2026-04-06 03:51:15.545543 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-04-06 03:51:15.545554 | orchestrator | Monday 06 April 2026 03:50:55 +0000 (0:00:06.810) 0:00:14.546 ********** 2026-04-06 03:51:15.545565 | orchestrator | ok: [testbed-manager] => (item=service) 2026-04-06 03:51:15.545576 | orchestrator | 2026-04-06 03:51:15.545587 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-04-06 03:51:15.545598 | orchestrator | Monday 06 April 2026 03:50:59 +0000 (0:00:03.418) 0:00:17.965 ********** 2026-04-06 03:51:15.545608 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-06 03:51:15.545619 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-04-06 03:51:15.545630 | orchestrator | 2026-04-06 03:51:15.545641 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-04-06 03:51:15.545651 | orchestrator | Monday 06 April 2026 03:51:03 +0000 (0:00:03.934) 0:00:21.899 ********** 2026-04-06 03:51:15.545662 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-04-06 03:51:15.545673 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-04-06 03:51:15.545684 | orchestrator | 2026-04-06 03:51:15.545762 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-04-06 03:51:15.545782 | orchestrator | Monday 06 April 2026 03:51:09 +0000 (0:00:06.584) 0:00:28.484 ********** 2026-04-06 03:51:15.545801 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-04-06 03:51:15.545817 | orchestrator | 2026-04-06 03:51:15.545828 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 
03:51:15.545840 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 03:51:15.545851 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 03:51:15.545862 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 03:51:15.545882 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 03:51:15.545899 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 03:51:15.545944 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 03:51:15.545978 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 03:51:15.545995 | orchestrator | 2026-04-06 03:51:15.546077 | orchestrator | 2026-04-06 03:51:15.546101 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 03:51:15.546121 | orchestrator | Monday 06 April 2026 03:51:14 +0000 (0:00:05.111) 0:00:33.595 ********** 2026-04-06 03:51:15.546140 | orchestrator | =============================================================================== 2026-04-06 03:51:15.546157 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.81s 2026-04-06 03:51:15.546199 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.58s 2026-04-06 03:51:15.546212 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.11s 2026-04-06 03:51:15.546222 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.03s 2026-04-06 03:51:15.546233 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.93s 2026-04-06 
03:51:15.546244 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.42s 2026-04-06 03:51:15.546255 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.69s 2026-04-06 03:51:15.546265 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.91s 2026-04-06 03:51:15.546276 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s 2026-04-06 03:51:18.089204 | orchestrator | 2026-04-06 03:51:18 | INFO  | Task 844b29c6-891a-4413-93cf-ef91606e8605 (gnocchi) was prepared for execution. 2026-04-06 03:51:18.089334 | orchestrator | 2026-04-06 03:51:18 | INFO  | It takes a moment until task 844b29c6-891a-4413-93cf-ef91606e8605 (gnocchi) has been started and output is visible here. 2026-04-06 03:51:23.897898 | orchestrator | 2026-04-06 03:51:23.898098 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 03:51:23.898123 | orchestrator | 2026-04-06 03:51:23.898136 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 03:51:23.898148 | orchestrator | Monday 06 April 2026 03:51:22 +0000 (0:00:00.284) 0:00:00.284 ********** 2026-04-06 03:51:23.898160 | orchestrator | ok: [testbed-node-0] 2026-04-06 03:51:23.898172 | orchestrator | ok: [testbed-node-1] 2026-04-06 03:51:23.898183 | orchestrator | ok: [testbed-node-2] 2026-04-06 03:51:23.898193 | orchestrator | 2026-04-06 03:51:23.898204 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 03:51:23.898216 | orchestrator | Monday 06 April 2026 03:51:23 +0000 (0:00:00.348) 0:00:00.633 ********** 2026-04-06 03:51:23.898226 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False) 2026-04-06 03:51:23.898238 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True 
2026-04-06 03:51:23.898249 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False) 2026-04-06 03:51:23.898260 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False) 2026-04-06 03:51:23.898271 | orchestrator | 2026-04-06 03:51:23.898282 | orchestrator | PLAY [Apply role gnocchi] ****************************************************** 2026-04-06 03:51:23.898293 | orchestrator | skipping: no hosts matched 2026-04-06 03:51:23.898304 | orchestrator | 2026-04-06 03:51:23.898315 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 03:51:23.898326 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 03:51:23.898339 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 03:51:23.898350 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 03:51:23.898361 | orchestrator | 2026-04-06 03:51:23.898372 | orchestrator | 2026-04-06 03:51:23.898413 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 03:51:23.898425 | orchestrator | Monday 06 April 2026 03:51:23 +0000 (0:00:00.415) 0:00:01.049 ********** 2026-04-06 03:51:23.898436 | orchestrator | =============================================================================== 2026-04-06 03:51:23.898447 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2026-04-06 03:51:23.898458 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2026-04-06 03:51:26.502497 | orchestrator | 2026-04-06 03:51:26 | INFO  | Task 12f5433a-3e94-423a-8a0d-f5621d3ab377 (manila) was prepared for execution. 
2026-04-06 03:51:26.502595 | orchestrator | 2026-04-06 03:51:26 | INFO  | It takes a moment until task 12f5433a-3e94-423a-8a0d-f5621d3ab377 (manila) has been started and output is visible here. 2026-04-06 03:52:09.159578 | orchestrator | 2026-04-06 03:52:09.159728 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 03:52:09.159742 | orchestrator | 2026-04-06 03:52:09.159749 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 03:52:09.159756 | orchestrator | Monday 06 April 2026 03:51:31 +0000 (0:00:00.281) 0:00:00.281 ********** 2026-04-06 03:52:09.159763 | orchestrator | ok: [testbed-node-0] 2026-04-06 03:52:09.159771 | orchestrator | ok: [testbed-node-1] 2026-04-06 03:52:09.159778 | orchestrator | ok: [testbed-node-2] 2026-04-06 03:52:09.159785 | orchestrator | 2026-04-06 03:52:09.159792 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 03:52:09.159796 | orchestrator | Monday 06 April 2026 03:51:31 +0000 (0:00:00.365) 0:00:00.646 ********** 2026-04-06 03:52:09.159800 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True) 2026-04-06 03:52:09.159805 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True) 2026-04-06 03:52:09.159809 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True) 2026-04-06 03:52:09.159815 | orchestrator | 2026-04-06 03:52:09.159821 | orchestrator | PLAY [Apply role manila] ******************************************************* 2026-04-06 03:52:09.159831 | orchestrator | 2026-04-06 03:52:09.159838 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-06 03:52:09.159844 | orchestrator | Monday 06 April 2026 03:51:31 +0000 (0:00:00.479) 0:00:01.125 ********** 2026-04-06 03:52:09.159850 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-04-06 03:52:09.159858 | orchestrator | 2026-04-06 03:52:09.159878 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-06 03:52:09.159885 | orchestrator | Monday 06 April 2026 03:51:32 +0000 (0:00:00.587) 0:00:01.713 ********** 2026-04-06 03:52:09.159891 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:52:09.159899 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:52:09.159904 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:52:09.159911 | orchestrator | 2026-04-06 03:52:09.159916 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************ 2026-04-06 03:52:09.159922 | orchestrator | Monday 06 April 2026 03:51:33 +0000 (0:00:00.546) 0:00:02.260 ********** 2026-04-06 03:52:09.159928 | orchestrator | changed: [testbed-node-0] => (item=manila (share)) 2026-04-06 03:52:09.159934 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2)) 2026-04-06 03:52:09.159940 | orchestrator | 2026-04-06 03:52:09.159946 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] *********************** 2026-04-06 03:52:09.159952 | orchestrator | Monday 06 April 2026 03:51:39 +0000 (0:00:06.657) 0:00:08.917 ********** 2026-04-06 03:52:09.159960 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal) 2026-04-06 03:52:09.159968 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public) 2026-04-06 03:52:09.159974 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal) 2026-04-06 03:52:09.160002 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public) 2026-04-06 03:52:09.160007 | orchestrator | 2026-04-06 03:52:09.160011 | orchestrator | TASK [service-ks-register : manila 
| Creating projects] ************************ 2026-04-06 03:52:09.160015 | orchestrator | Monday 06 April 2026 03:51:52 +0000 (0:00:12.941) 0:00:21.859 ********** 2026-04-06 03:52:09.160019 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-06 03:52:09.160023 | orchestrator | 2026-04-06 03:52:09.160026 | orchestrator | TASK [service-ks-register : manila | Creating users] *************************** 2026-04-06 03:52:09.160030 | orchestrator | Monday 06 April 2026 03:51:55 +0000 (0:00:03.181) 0:00:25.040 ********** 2026-04-06 03:52:09.160034 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-06 03:52:09.160039 | orchestrator | changed: [testbed-node-0] => (item=manila -> service) 2026-04-06 03:52:09.160043 | orchestrator | 2026-04-06 03:52:09.160046 | orchestrator | TASK [service-ks-register : manila | Creating roles] *************************** 2026-04-06 03:52:09.160050 | orchestrator | Monday 06 April 2026 03:51:59 +0000 (0:00:03.885) 0:00:28.926 ********** 2026-04-06 03:52:09.160054 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-06 03:52:09.160058 | orchestrator | 2026-04-06 03:52:09.160062 | orchestrator | TASK [service-ks-register : manila | Granting user roles] ********************** 2026-04-06 03:52:09.160066 | orchestrator | Monday 06 April 2026 03:52:02 +0000 (0:00:03.256) 0:00:32.182 ********** 2026-04-06 03:52:09.160069 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin) 2026-04-06 03:52:09.160073 | orchestrator | 2026-04-06 03:52:09.160077 | orchestrator | TASK [manila : Ensuring config directories exist] ****************************** 2026-04-06 03:52:09.160081 | orchestrator | Monday 06 April 2026 03:52:06 +0000 (0:00:03.953) 0:00:36.136 ********** 2026-04-06 03:52:09.160101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-06 03:52:09.160110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-06 03:52:09.160124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-06 03:52:09.160136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:09.160142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:09.160147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 
'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:09.160157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:20.199757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:20.199885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:20.199921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:20.199932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:20.199941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 
'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:20.199951 | orchestrator | 2026-04-06 03:52:20.199990 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-06 03:52:20.200001 | orchestrator | Monday 06 April 2026 03:52:09 +0000 (0:00:02.322) 0:00:38.459 ********** 2026-04-06 03:52:20.200010 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 03:52:20.200019 | orchestrator | 2026-04-06 03:52:20.200028 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] ************** 2026-04-06 03:52:20.200037 | orchestrator | Monday 06 April 2026 03:52:09 +0000 (0:00:00.623) 0:00:39.083 ********** 2026-04-06 03:52:20.200045 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:52:20.200056 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:52:20.200064 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:52:20.200073 | orchestrator | 2026-04-06 03:52:20.200082 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] ********************* 2026-04-06 03:52:20.200091 | orchestrator | Monday 06 April 2026 03:52:10 +0000 (0:00:01.051) 0:00:40.135 ********** 2026-04-06 03:52:20.200101 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-06 03:52:20.200129 | orchestrator | 
changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-06 03:52:20.200139 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-06 03:52:20.200148 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-06 03:52:20.200164 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-06 03:52:20.200179 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-06 03:52:20.200189 | orchestrator | 2026-04-06 03:52:20.200200 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] ********************************* 2026-04-06 03:52:20.200210 | orchestrator | Monday 06 April 2026 03:52:12 +0000 (0:00:01.882) 0:00:42.017 ********** 2026-04-06 03:52:20.200221 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-06 03:52:20.200236 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-06 03:52:20.200261 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': 
['CEPHFS']}) 2026-04-06 03:52:20.200276 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-06 03:52:20.200290 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-06 03:52:20.200304 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-06 03:52:20.200317 | orchestrator | 2026-04-06 03:52:20.200331 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] ***** 2026-04-06 03:52:20.200346 | orchestrator | Monday 06 April 2026 03:52:14 +0000 (0:00:01.312) 0:00:43.329 ********** 2026-04-06 03:52:20.200361 | orchestrator | ok: [testbed-node-0] => (item=manila-share) 2026-04-06 03:52:20.200376 | orchestrator | ok: [testbed-node-1] => (item=manila-share) 2026-04-06 03:52:20.200390 | orchestrator | ok: [testbed-node-2] => (item=manila-share) 2026-04-06 03:52:20.200405 | orchestrator | 2026-04-06 03:52:20.200421 | orchestrator | TASK [manila : Check if policies shall be overwritten] ************************* 2026-04-06 03:52:20.200437 | orchestrator | Monday 06 April 2026 03:52:14 +0000 (0:00:00.683) 0:00:44.012 ********** 2026-04-06 03:52:20.200452 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:52:20.200466 | orchestrator | 2026-04-06 03:52:20.200482 | orchestrator | TASK [manila : Set manila policy file] ***************************************** 2026-04-06 03:52:20.200497 | orchestrator | Monday 06 April 2026 03:52:14 +0000 (0:00:00.143) 0:00:44.156 ********** 2026-04-06 03:52:20.200512 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:52:20.200526 | orchestrator | skipping: 
[testbed-node-1] 2026-04-06 03:52:20.200538 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:52:20.200552 | orchestrator | 2026-04-06 03:52:20.200566 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-06 03:52:20.200580 | orchestrator | Monday 06 April 2026 03:52:15 +0000 (0:00:00.569) 0:00:44.726 ********** 2026-04-06 03:52:20.200594 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 03:52:20.200608 | orchestrator | 2026-04-06 03:52:20.200621 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] ********* 2026-04-06 03:52:20.200635 | orchestrator | Monday 06 April 2026 03:52:16 +0000 (0:00:00.622) 0:00:45.348 ********** 2026-04-06 03:52:20.200685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-06 03:52:21.161402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-06 03:52:21.161481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-06 03:52:21.161490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:21.161497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:21.161503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:21.161536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:21.161550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:21.161556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:21.161561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:21.161567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:21.161572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:21.161583 | orchestrator | 2026-04-06 03:52:21.161589 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-04-06 03:52:21.161596 | orchestrator | Monday 06 April 2026 03:52:20 +0000 (0:00:04.157) 0:00:49.506 ********** 2026-04-06 03:52:21.161606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-06 03:52:21.897189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 03:52:21.897293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 03:52:21.897309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 03:52:21.897324 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:52:21.897347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-06 03:52:21.897405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 03:52:21.897424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 03:52:21.897472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 03:52:21.897492 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:52:21.897509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-06 03:52:21.897529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 03:52:21.897548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 03:52:21.897574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 03:52:21.897585 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:52:21.897596 | orchestrator | 2026-04-06 03:52:21.897606 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-04-06 03:52:21.897618 | orchestrator | Monday 06 April 2026 03:52:21 +0000 (0:00:00.969) 0:00:50.476 ********** 2026-04-06 03:52:21.897637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-06 03:52:26.645104 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 03:52:26.645216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 03:52:26.645233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 03:52:26.645268 | 
orchestrator | skipping: [testbed-node-0] 2026-04-06 03:52:26.645282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-06 03:52:26.645294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 03:52:26.645305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 03:52:26.645338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 03:52:26.645349 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:52:26.645360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-06 03:52:26.645378 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 03:52:26.645389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 03:52:26.645399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 03:52:26.645410 | orchestrator | skipping: [testbed-node-2] 2026-04-06 
03:52:26.645420 | orchestrator | 2026-04-06 03:52:26.645431 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-04-06 03:52:26.645444 | orchestrator | Monday 06 April 2026 03:52:22 +0000 (0:00:00.972) 0:00:51.449 ********** 2026-04-06 03:52:26.645468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-06 03:52:33.751401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-06 03:52:33.751584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-06 03:52:33.751619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:33.751641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:33.751739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:33.751816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:33.751843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:33.751880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:33.751903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:33.751924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:33.751944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:33.751965 | orchestrator | 2026-04-06 03:52:33.751987 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-04-06 03:52:33.752009 | orchestrator | Monday 06 April 2026 03:52:26 +0000 (0:00:04.726) 0:00:56.175 ********** 2026-04-06 03:52:33.752049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-06 03:52:38.500084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-06 03:52:38.500212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-06 03:52:38.500228 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:38.500239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 03:52:38.500248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:38.500286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': 
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 03:52:38.500302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:38.500311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 03:52:38.500320 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:38.500328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:38.500336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 03:52:38.500345 | orchestrator | 2026-04-06 03:52:38.500356 | orchestrator | TASK [manila : 
Copying over manila-share.conf] ********************************* 2026-04-06 03:52:38.500371 | orchestrator | Monday 06 April 2026 03:52:33 +0000 (0:00:06.890) 0:01:03.066 ********** 2026-04-06 03:52:38.500380 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-04-06 03:52:38.500389 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-04-06 03:52:38.500400 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-04-06 03:52:38.500409 | orchestrator | 2026-04-06 03:52:38.500416 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-04-06 03:52:38.500424 | orchestrator | Monday 06 April 2026 03:52:37 +0000 (0:00:04.024) 0:01:07.091 ********** 2026-04-06 03:52:38.500447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-06 03:52:41.952720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 03:52:41.952839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 03:52:41.952859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 03:52:41.952874 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:52:41.952889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-06 03:52:41.952938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 03:52:41.952984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 03:52:41.953021 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 03:52:41.953043 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:52:41.953063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-06 03:52:41.953082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 03:52:41.953101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 03:52:41.953130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 03:52:41.953162 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:52:41.953181 | orchestrator | 2026-04-06 03:52:41.953203 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-04-06 03:52:41.953223 | orchestrator | Monday 06 April 2026 03:52:38 +0000 (0:00:00.709) 0:01:07.800 ********** 2026-04-06 03:52:41.953259 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-06 03:53:24.205361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-06 03:53:24.205463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-06 03:53:24.205475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:53:24.205515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:53:24.205523 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 03:53:24.205544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-06 03:53:24.205554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-06 03:53:24.205561 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-06 03:53:24.205567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 03:53:24.205584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 03:53:24.205594 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 03:53:24.205601 | orchestrator | 2026-04-06 03:53:24.205609 | orchestrator | TASK [manila : Creating Manila database] *************************************** 2026-04-06 03:53:24.205617 | orchestrator | Monday 06 April 2026 03:52:42 +0000 (0:00:03.465) 0:01:11.266 ********** 2026-04-06 03:53:24.205624 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:53:24.205631 | orchestrator | 2026-04-06 03:53:24.205713 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] ********** 2026-04-06 03:53:24.205722 | orchestrator | Monday 06 April 2026 03:52:44 +0000 (0:00:02.199) 0:01:13.466 ********** 2026-04-06 03:53:24.205729 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:53:24.205740 | orchestrator | 2026-04-06 03:53:24.205749 | orchestrator | TASK [manila : Running Manila bootstrap container] ***************************** 2026-04-06 03:53:24.205765 | orchestrator | Monday 06 April 2026 03:52:46 +0000 (0:00:02.364) 0:01:15.830 ********** 2026-04-06 03:53:24.205777 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:53:24.205786 | orchestrator | 2026-04-06 03:53:24.205797 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-04-06 03:53:24.205807 | orchestrator | Monday 06 April 2026 03:53:23 +0000 (0:00:37.276) 0:01:53.107 ********** 2026-04-06 03:53:24.205817 | 
orchestrator | 2026-04-06 03:53:24.205834 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-04-06 03:54:19.976376 | orchestrator | Monday 06 April 2026 03:53:24 +0000 (0:00:00.112) 0:01:53.219 ********** 2026-04-06 03:54:19.976540 | orchestrator | 2026-04-06 03:54:19.976586 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-04-06 03:54:19.976690 | orchestrator | Monday 06 April 2026 03:53:24 +0000 (0:00:00.092) 0:01:53.312 ********** 2026-04-06 03:54:19.976707 | orchestrator | 2026-04-06 03:54:19.976719 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************ 2026-04-06 03:54:19.976730 | orchestrator | Monday 06 April 2026 03:53:24 +0000 (0:00:00.083) 0:01:53.396 ********** 2026-04-06 03:54:19.976742 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:54:19.976755 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:54:19.976766 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:54:19.976777 | orchestrator | 2026-04-06 03:54:19.976789 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] *********************** 2026-04-06 03:54:19.976800 | orchestrator | Monday 06 April 2026 03:53:39 +0000 (0:00:14.985) 0:02:08.382 ********** 2026-04-06 03:54:19.976811 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:54:19.976822 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:54:19.976834 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:54:19.976845 | orchestrator | 2026-04-06 03:54:19.976856 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ****************** 2026-04-06 03:54:19.976867 | orchestrator | Monday 06 April 2026 03:53:50 +0000 (0:00:11.474) 0:02:19.856 ********** 2026-04-06 03:54:19.976905 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:54:19.976920 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:54:19.976932 | 
orchestrator | changed: [testbed-node-2]
2026-04-06 03:54:19.976945 | orchestrator |
2026-04-06 03:54:19.976958 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] **********************
2026-04-06 03:54:19.976971 | orchestrator | Monday 06 April 2026 03:54:01 +0000 (0:00:10.507) 0:02:30.364 **********
2026-04-06 03:54:19.976984 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:54:19.976997 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:54:19.977009 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:54:19.977021 | orchestrator |
2026-04-06 03:54:19.977035 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 03:54:19.977050 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 03:54:19.977065 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-06 03:54:19.977078 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-06 03:54:19.977091 | orchestrator |
2026-04-06 03:54:19.977104 | orchestrator |
2026-04-06 03:54:19.977116 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 03:54:19.977129 | orchestrator | Monday 06 April 2026 03:54:19 +0000 (0:00:18.290) 0:02:48.654 **********
2026-04-06 03:54:19.977142 | orchestrator | ===============================================================================
2026-04-06 03:54:19.977155 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 37.28s
2026-04-06 03:54:19.977168 | orchestrator | manila : Restart manila-share container -------------------------------- 18.29s
2026-04-06 03:54:19.977180 | orchestrator | manila : Restart manila-api container ---------------------------------- 14.99s
2026-04-06 03:54:19.977193 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 12.94s
2026-04-06 03:54:19.977206 | orchestrator | manila : Restart manila-data container --------------------------------- 11.47s
2026-04-06 03:54:19.977219 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 10.51s
2026-04-06 03:54:19.977257 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.89s
2026-04-06 03:54:19.977269 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.66s
2026-04-06 03:54:19.977280 | orchestrator | manila : Copying over config.json files for services -------------------- 4.73s
2026-04-06 03:54:19.977291 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.16s
2026-04-06 03:54:19.977302 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 4.02s
2026-04-06 03:54:19.977313 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.95s
2026-04-06 03:54:19.977324 | orchestrator | service-ks-register : manila | Creating users --------------------------- 3.89s
2026-04-06 03:54:19.977334 | orchestrator | manila : Check manila containers ---------------------------------------- 3.47s
2026-04-06 03:54:19.977345 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.26s
2026-04-06 03:54:19.977356 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.18s
2026-04-06 03:54:19.977367 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.36s
2026-04-06 03:54:19.977378 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.32s
2026-04-06 03:54:19.977389 | orchestrator | manila : Creating Manila database --------------------------------------- 2.20s
2026-04-06 03:54:19.977400 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.88s
2026-04-06 03:54:20.384601 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh
2026-04-06 03:54:32.798377 | orchestrator | 2026-04-06 03:54:32 | INFO  | Task 46b61a0b-9909-476f-83aa-1c733c0d54a1 (netdata) was prepared for execution.
2026-04-06 03:54:32.798513 | orchestrator | 2026-04-06 03:54:32 | INFO  | It takes a moment until task 46b61a0b-9909-476f-83aa-1c733c0d54a1 (netdata) has been started and output is visible here.
2026-04-06 03:56:10.573130 | orchestrator |
2026-04-06 03:56:10.573276 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 03:56:10.573302 | orchestrator |
2026-04-06 03:56:10.573318 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 03:56:10.573336 | orchestrator | Monday 06 April 2026 03:54:37 +0000 (0:00:00.275) 0:00:00.275 **********
2026-04-06 03:56:10.573353 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-04-06 03:56:10.573370 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-04-06 03:56:10.573387 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-04-06 03:56:10.573403 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-04-06 03:56:10.573419 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-04-06 03:56:10.573435 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-04-06 03:56:10.573451 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-04-06 03:56:10.573464 | orchestrator |
2026-04-06 03:56:10.573479 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-04-06 03:56:10.573494 | orchestrator |
2026-04-06 03:56:10.573510 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-04-06 03:56:10.573527 | orchestrator | Monday 06 April 2026 03:54:38 +0000 (0:00:01.007) 0:00:01.283 **********
2026-04-06 03:56:10.573546 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 03:56:10.573566 | orchestrator |
2026-04-06 03:56:10.573583 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-04-06 03:56:10.573669 | orchestrator | Monday 06 April 2026 03:54:40 +0000 (0:00:01.487) 0:00:02.771 **********
2026-04-06 03:56:10.573688 | orchestrator | ok: [testbed-manager]
2026-04-06 03:56:10.573707 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:56:10.573724 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:56:10.573742 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:56:10.573758 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:56:10.573773 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:56:10.573790 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:56:10.573806 | orchestrator |
2026-04-06 03:56:10.573824 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-04-06 03:56:10.573842 | orchestrator | Monday 06 April 2026 03:54:42 +0000 (0:00:02.085) 0:00:04.857 **********
2026-04-06 03:56:10.573859 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:56:10.573876 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:56:10.573894 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:56:10.573911 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:56:10.573929 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:56:10.573942 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:56:10.573953 | orchestrator | ok: [testbed-manager]
2026-04-06 03:56:10.573964 | orchestrator |
2026-04-06 03:56:10.573976 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-04-06 03:56:10.573987 | orchestrator | Monday 06 April 2026 03:54:44 +0000 (0:00:02.389) 0:00:07.246 **********
2026-04-06 03:56:10.573998 | orchestrator | changed: [testbed-manager]
2026-04-06 03:56:10.574007 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:56:10.574076 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:56:10.574086 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:56:10.574096 | orchestrator | changed: [testbed-node-3]
2026-04-06 03:56:10.574106 | orchestrator | changed: [testbed-node-4]
2026-04-06 03:56:10.574116 | orchestrator | changed: [testbed-node-5]
2026-04-06 03:56:10.574189 | orchestrator |
2026-04-06 03:56:10.574201 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-04-06 03:56:10.574211 | orchestrator | Monday 06 April 2026 03:54:46 +0000 (0:00:01.683) 0:00:08.930 **********
2026-04-06 03:56:10.574221 | orchestrator | changed: [testbed-manager]
2026-04-06 03:56:10.574230 | orchestrator | changed: [testbed-node-3]
2026-04-06 03:56:10.574255 | orchestrator | changed: [testbed-node-4]
2026-04-06 03:56:10.574265 | orchestrator | changed: [testbed-node-5]
2026-04-06 03:56:10.574275 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:56:10.574284 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:56:10.574294 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:56:10.574303 | orchestrator |
2026-04-06 03:56:10.574313 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-04-06 03:56:10.574323 | orchestrator | Monday 06 April 2026 03:55:01 +0000 (0:00:15.267) 0:00:24.197 **********
2026-04-06 03:56:10.574333 | orchestrator | changed: [testbed-node-4]
2026-04-06 03:56:10.574342 | orchestrator | changed: [testbed-manager]
2026-04-06 03:56:10.574352 | orchestrator | changed: [testbed-node-3]
2026-04-06 03:56:10.574361 | orchestrator | changed: [testbed-node-5]
2026-04-06 03:56:10.574371 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:56:10.574380 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:56:10.574390 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:56:10.574399 | orchestrator |
2026-04-06 03:56:10.574409 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-04-06 03:56:10.574419 | orchestrator | Monday 06 April 2026 03:55:43 +0000 (0:00:41.623) 0:01:05.820 **********
2026-04-06 03:56:10.574429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 03:56:10.574441 | orchestrator |
2026-04-06 03:56:10.574451 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-04-06 03:56:10.574461 | orchestrator | Monday 06 April 2026 03:55:45 +0000 (0:00:01.717) 0:01:07.537 **********
2026-04-06 03:56:10.574470 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-04-06 03:56:10.574481 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-04-06 03:56:10.574491 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-04-06 03:56:10.574500 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-04-06 03:56:10.574532 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-04-06 03:56:10.574543 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-04-06 03:56:10.574552 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-04-06 03:56:10.574562 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-04-06 03:56:10.574572 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-04-06 03:56:10.574581 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-04-06 03:56:10.574591 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-04-06 03:56:10.574635 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-04-06 03:56:10.574646 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-04-06 03:56:10.574655 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-04-06 03:56:10.574665 | orchestrator |
2026-04-06 03:56:10.574675 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-04-06 03:56:10.574686 | orchestrator | Monday 06 April 2026 03:55:49 +0000 (0:00:03.877) 0:01:11.415 **********
2026-04-06 03:56:10.574696 | orchestrator | ok: [testbed-manager]
2026-04-06 03:56:10.574706 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:56:10.574716 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:56:10.574725 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:56:10.574735 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:56:10.574745 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:56:10.574766 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:56:10.574775 | orchestrator |
2026-04-06 03:56:10.574785 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-04-06 03:56:10.574795 | orchestrator | Monday 06 April 2026 03:55:50 +0000 (0:00:01.474) 0:01:12.890 **********
2026-04-06 03:56:10.574805 | orchestrator | changed: [testbed-manager]
2026-04-06 03:56:10.574815 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:56:10.574825 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:56:10.574834 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:56:10.574844 | orchestrator | changed: [testbed-node-3]
2026-04-06 03:56:10.574854 | orchestrator | changed: [testbed-node-4]
2026-04-06 03:56:10.574864 | orchestrator | changed: [testbed-node-5]
2026-04-06 03:56:10.574873 | orchestrator |
2026-04-06 03:56:10.574883 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-04-06 03:56:10.574894 | orchestrator | Monday 06 April 2026 03:55:52 +0000 (0:00:01.589) 0:01:14.480 **********
2026-04-06 03:56:10.574903 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:56:10.574913 | orchestrator | ok: [testbed-manager]
2026-04-06 03:56:10.574923 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:56:10.574932 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:56:10.574942 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:56:10.574956 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:56:10.574971 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:56:10.574987 | orchestrator |
2026-04-06 03:56:10.575003 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-04-06 03:56:10.575019 | orchestrator | Monday 06 April 2026 03:55:53 +0000 (0:00:01.399) 0:01:15.879 **********
2026-04-06 03:56:10.575035 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:56:10.575051 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:56:10.575067 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:56:10.575092 | orchestrator | ok: [testbed-manager]
2026-04-06 03:56:10.575109 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:56:10.575124 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:56:10.575139 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:56:10.575153 | orchestrator |
2026-04-06 03:56:10.575168 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-04-06 03:56:10.575184 | orchestrator | Monday 06 April 2026 03:55:55 +0000 (0:00:01.797) 0:01:17.677 **********
2026-04-06 03:56:10.575201 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-04-06 03:56:10.575232 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 03:56:10.575250 | orchestrator |
2026-04-06 03:56:10.575267 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-04-06 03:56:10.575283 | orchestrator | Monday 06 April 2026 03:55:56 +0000 (0:00:01.593) 0:01:19.271 **********
2026-04-06 03:56:10.575299 | orchestrator | changed: [testbed-manager]
2026-04-06 03:56:10.575312 | orchestrator |
2026-04-06 03:56:10.575322 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-04-06 03:56:10.575332 | orchestrator | Monday 06 April 2026 03:55:59 +0000 (0:00:02.456) 0:01:21.728 **********
2026-04-06 03:56:10.575341 | orchestrator | changed: [testbed-node-0]
2026-04-06 03:56:10.575351 | orchestrator | changed: [testbed-node-1]
2026-04-06 03:56:10.575360 | orchestrator | changed: [testbed-node-5]
2026-04-06 03:56:10.575370 | orchestrator | changed: [testbed-node-3]
2026-04-06 03:56:10.575379 | orchestrator | changed: [testbed-node-4]
2026-04-06 03:56:10.575389 | orchestrator | changed: [testbed-node-2]
2026-04-06 03:56:10.575398 | orchestrator | changed: [testbed-manager]
2026-04-06 03:56:10.575408 | orchestrator |
2026-04-06 03:56:10.575417 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 03:56:10.575428 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 03:56:10.575450 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 03:56:10.575460 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 03:56:10.575470 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 03:56:10.575492 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 03:56:11.123712 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 03:56:11.123816 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 03:56:11.123831 | orchestrator |
2026-04-06 03:56:11.123842 | orchestrator |
2026-04-06 03:56:11.123853 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 03:56:11.123865 | orchestrator | Monday 06 April 2026 03:56:10 +0000 (0:00:11.138) 0:01:32.866 **********
2026-04-06 03:56:11.123875 | orchestrator | ===============================================================================
2026-04-06 03:56:11.123885 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 41.62s
2026-04-06 03:56:11.123895 | orchestrator | osism.services.netdata : Add repository -------------------------------- 15.27s
2026-04-06 03:56:11.123904 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.14s
2026-04-06 03:56:11.123914 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.88s
2026-04-06 03:56:11.123924 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.46s
2026-04-06 03:56:11.123933 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.39s
2026-04-06 03:56:11.123943 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.09s
2026-04-06 03:56:11.123953 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.80s
2026-04-06 03:56:11.123962 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.72s
2026-04-06 03:56:11.123972 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.68s
2026-04-06 03:56:11.123982 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.59s
2026-04-06 03:56:11.123991 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.59s
2026-04-06 03:56:11.124002 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.49s
2026-04-06 03:56:11.124018 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.47s
2026-04-06 03:56:11.124033 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.40s
2026-04-06 03:56:11.124048 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.01s
2026-04-06 03:56:13.902081 | orchestrator | 2026-04-06 03:56:13 | INFO  | Task 6f796bd8-0bc8-4731-bf95-8a144e79872b (prometheus) was prepared for execution.
2026-04-06 03:56:13.902230 | orchestrator | 2026-04-06 03:56:13 | INFO  | It takes a moment until task 6f796bd8-0bc8-4731-bf95-8a144e79872b (prometheus) has been started and output is visible here.
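Each Ansible play in this log ends with a PLAY RECAP reporting per-host counters. When driving such runs from CI, the usual gate is that every host shows failed=0 and unreachable=0. A minimal, hypothetical sketch (not part of OSISM, Zuul, or Ansible) that parses recap lines like the ones above:

```python
import re

# Matches Ansible PLAY RECAP lines such as:
#   testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def parse_recap(lines):
    """Return {host: {counter: int}} for every recap line found."""
    stats = {}
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            stats[m.group("host")] = {
                k: int(v) for k, v in m.groupdict().items() if k != "host"
            }
    return stats

def all_green(stats):
    """True when no host reports failures or unreachability."""
    return all(s["failed"] == 0 and s["unreachable"] == 0 for s in stats.values())

# Sample lines taken from the netdata recap above.
recap = [
    "testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
    "testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
]
stats = parse_recap(recap)
print(all_green(stats))  # True
```

The regex only anchors on the counters that matter for gating; `skipped`, `rescued`, and `ignored` are left unparsed for brevity.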
2026-04-06 03:56:24.164927 | orchestrator |
2026-04-06 03:56:24.165035 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 03:56:24.165051 | orchestrator |
2026-04-06 03:56:24.165060 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 03:56:24.165068 | orchestrator | Monday 06 April 2026 03:56:18 +0000 (0:00:00.327) 0:00:00.327 **********
2026-04-06 03:56:24.165076 | orchestrator | ok: [testbed-manager]
2026-04-06 03:56:24.165104 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:56:24.165110 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:56:24.165115 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:56:24.165120 | orchestrator | ok: [testbed-node-3]
2026-04-06 03:56:24.165125 | orchestrator | ok: [testbed-node-4]
2026-04-06 03:56:24.165140 | orchestrator | ok: [testbed-node-5]
2026-04-06 03:56:24.165146 | orchestrator |
2026-04-06 03:56:24.165151 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 03:56:24.165156 | orchestrator | Monday 06 April 2026 03:56:19 +0000 (0:00:00.942) 0:00:01.269 **********
2026-04-06 03:56:24.165161 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-04-06 03:56:24.165167 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-04-06 03:56:24.165171 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-04-06 03:56:24.165176 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-04-06 03:56:24.165181 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-04-06 03:56:24.165186 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-04-06 03:56:24.165191 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-04-06 03:56:24.165195 | orchestrator |
2026-04-06 03:56:24.165200 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-04-06 03:56:24.165205 | orchestrator |
2026-04-06 03:56:24.165210 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-04-06 03:56:24.165215 | orchestrator | Monday 06 April 2026 03:56:20 +0000 (0:00:01.027) 0:00:02.297 **********
2026-04-06 03:56:24.165221 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 03:56:24.165227 | orchestrator |
2026-04-06 03:56:24.165232 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-04-06 03:56:24.165237 | orchestrator | Monday 06 April 2026 03:56:22 +0000 (0:00:01.563) 0:00:03.861 **********
2026-04-06 03:56:24.165245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:24.165254 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-06 03:56:24.165260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:24.165272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:24.165294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:24.165300 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:24.165305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:24.165310 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:24.165317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:24.165327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:24.165340 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:24.165362 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:24.988817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:24.988904 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:24.988914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:24.988932 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:24.988947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:24.988955 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:24.988980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:24.989010 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-06 03:56:24.989020 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-06 03:56:24.989027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:24.989034 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:24.989041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:24.989053 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-06 03:56:24.989060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name':
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 03:56:24.989073 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 03:56:30.405782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 03:56:30.405881 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-06 03:56:30.405894 | orchestrator | 2026-04-06 03:56:30.405903 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-06 03:56:30.405912 | orchestrator | Monday 06 April 2026 03:56:24 +0000 (0:00:02.898) 0:00:06.760 ********** 2026-04-06 03:56:30.405920 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 03:56:30.405939 | orchestrator | 2026-04-06 03:56:30.405947 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-06 03:56:30.405954 | orchestrator | Monday 06 April 2026 03:56:26 +0000 (0:00:01.950) 0:00:08.710 ********** 2026-04-06 03:56:30.405962 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-06 03:56:30.405991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 03:56:30.405999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 03:56:30.406006 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 03:56:30.406185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 03:56:30.406203 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 03:56:30.406210 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 03:56:30.406217 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 03:56:30.406224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 03:56:30.406242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 03:56:30.406251 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 03:56:30.406258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 03:56:30.406278 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 03:56:32.892811 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 03:56:32.892924 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 03:56:32.892941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 03:56:32.892981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 03:56:32.892995 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-06 03:56:32.893009 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-06 03:56:32.893036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 03:56:32.893069 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-06 03:56:32.893079 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-06 03:56:32.893096 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 03:56:32.893104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 03:56:32.893111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 03:56:32.893118 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 03:56:32.893125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 03:56:32.893139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 03:56:34.567651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-06 03:56:34.567727 | orchestrator | 2026-04-06 03:56:34.567749 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-06 03:56:34.567755 | orchestrator | Monday 06 April 2026 03:56:32 +0000 (0:00:05.953) 0:00:14.664 ********** 2026-04-06 03:56:34.567761 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-06 03:56:34.567767 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 03:56:34.567772 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 03:56:34.567844 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-06 03:56:34.567863 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 03:56:34.567868 | orchestrator | skipping: [testbed-manager] 2026-04-06 03:56:34.567873 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 03:56:34.567882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 03:56:34.567886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 03:56:34.567890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 03:56:34.567894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 03:56:34.567898 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:56:34.567902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 03:56:34.567909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 03:56:34.567917 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:34.816424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:34.816516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:34.816528 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:56:34.816537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:34.816545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:34.816551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:34.816584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:34.816590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:34.816635 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:56:34.816657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:34.816664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:34.816670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-06 03:56:34.816677 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:56:34.816684 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:34.816690 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:34.816697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-06 03:56:34.816703 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:56:34.816716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:34.816733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:35.992963 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-06
03:56:35.993065 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:56:35.993079 | orchestrator |
2026-04-06 03:56:35.993106 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2026-04-06 03:56:35.993116 | orchestrator | Monday 06 April 2026 03:56:34 +0000 (0:00:01.926) 0:00:16.590 **********
2026-04-06 03:56:35.993127 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-06 03:56:35.993138 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:35.993148 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:35.993183 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-06 03:56:35.993229 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:35.993240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:35.993249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:35.993257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:35.993266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:35.993274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:35.993290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:35.993305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:35.993319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:37.484246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:37.484410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:37.484437 | orchestrator | skipping: [testbed-manager]
2026-04-06 03:56:37.484454 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:56:37.484467 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:56:37.484497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name':
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:37.484566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:37.484586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:37.484675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:37.484694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:37.484709 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:56:37.484749 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:37.484767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:37.484783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-06 03:56:37.484798 | orchestrator | skipping: [testbed-node-3]
2026-04-06 03:56:37.484815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:37.484831 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:37.484864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-06 03:56:37.484881 | orchestrator | skipping: [testbed-node-4]
2026-04-06 03:56:37.484898 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:37.484926 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:41.284338 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-06 03:56:41.284454 | orchestrator | skipping: [testbed-node-5]
2026-04-06 03:56:41.284470 | orchestrator |
2026-04-06 03:56:41.284476 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-04-06 03:56:41.284483 | orchestrator | Monday 06 April 2026 03:56:37 +0000 (0:00:02.655) 0:00:19.245 **********
2026-04-06 03:56:41.284490 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-06 03:56:41.284497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:41.284518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:41.284547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:41.284553 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:41.284570 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:41.284575 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:41.284580 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 03:56:41.284585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:41.284637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:41.284642 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:41.284652 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:41.284657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:41.284669 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:44.294165 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:44.294267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:44.294278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:44.294307 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-06 03:56:44.294316 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-06 03:56:44.294350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-06 03:56:44.294358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:44.294382 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-06 03:56:44.294391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:44.294404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:44.294411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 03:56:44.294422 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:44.294428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:44.294435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:44.294449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 03:56:48.399294 | orchestrator |
2026-04-06 03:56:48.399405 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2026-04-06 03:56:48.399442 | orchestrator | Monday 06 April 2026 03:56:44 +0000 (0:00:06.807) 0:00:26.052 **********
2026-04-06 03:56:48.399467 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-06 03:56:48.399480 | orchestrator |
2026-04-06 03:56:48.399492 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2026-04-06 03:56:48.399503 | orchestrator | Monday 06 April 2026 03:56:45 +0000 (0:00:00.986) 0:00:27.039 **********
2026-04-06 03:56:48.399544 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1082956, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8750029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:48.399560 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1082956, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8750029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:48.399572 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1082956, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8750029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:48.399624 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1082956, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8750029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:48.399637 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1082956, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8750029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:48.399649 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1082956, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8750029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:48.399683 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1082980, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8809073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:48.399707 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1082980, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8809073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:48.399719 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1082980, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8809073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:48.399731 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1082956, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8750029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:48.399748 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1082980, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8809073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:48.399760 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1082944, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.874003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:48.399772 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1082944, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.874003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:48.399793 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1082980, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8809073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:50.523419 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1082944, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.874003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:50.523514 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1082980, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8809073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:50.523523 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1082944, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.874003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:50.523539 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1082944, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.874003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:50.523543 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1082980, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8809073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:50.523547 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1082970, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.879003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:50.523552 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1082970, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.879003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:50.523579 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1082970, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.879003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:50.523584 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1082944, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.874003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:50.523588 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1082970, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.879003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:50.523654 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1082970, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.879003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:50.523659 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1082942, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8690028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:50.523663 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1082942, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8690028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:50.523670 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1082942, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8690028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:50.523679 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1082970, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.879003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:52.336343 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1082942, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8690028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:52.336428 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1082942, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8690028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:52.336450 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1082958, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8764312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:52.336458 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1082958, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8764312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:52.336465 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1082944, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.874003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:52.336493 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1082958, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8764312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:52.336500 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1082958, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8764312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:52.336523 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1082968, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8780031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:52.336530 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1082942, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8690028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:52.336547 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1082968, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8780031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:52.336559 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1082968, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8780031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:52.336570 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1082958, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8764312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:52.336657 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1082968, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8780031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:52.336673 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1082963, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8766356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:52.336694 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1082958, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8764312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-06 03:56:54.125266 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1082963, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8766356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth':
False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:54.125356 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1082963, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8766356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:54.125363 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1082968, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8780031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:54.125381 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1082963, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8766356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:54.125386 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1082968, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8780031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:54.125391 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1082970, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.879003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-06 03:56:54.125396 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1082955, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8750029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:54.125412 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1082963, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8766356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:54.125420 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1082955, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8750029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:54.125426 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1082955, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8750029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:54.125434 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1082955, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1775440454.8750029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:54.125439 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1082963, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8766356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:54.125443 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082978, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.880223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:54.125448 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082978, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.880223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:54.125461 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082978, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.880223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:56.511739 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1082955, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8750029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:56.511843 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082978, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.880223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:56.511876 | orchestrator | skipping: [testbed-node-1] => 
(item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082938, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8679051, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:56.511883 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082938, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8679051, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:56.511890 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1082955, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8750029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:56.511897 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1082942, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8690028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-06 03:56:56.511906 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1083001, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8890033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:56.511936 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082938, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8679051, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:56.511945 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082978, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1775440454.880223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:56.511959 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1083001, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8890033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:56.511966 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082938, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8679051, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:56.511974 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1083001, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8890033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:56.511982 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082978, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.880223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:56.511988 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1082976, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.880223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:56.512004 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082938, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8679051, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:58.604687 | orchestrator | skipping: 
[testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1083001, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8890033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:58.604799 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082943, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.870003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:58.604819 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1082976, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.880223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:58.604834 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1082976, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.880223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:58.604844 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082938, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8679051, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:58.604852 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1082976, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.880223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:58.604865 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1082940, 'dev': 119, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8690028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:58.604932 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1082958, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8764312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-06 03:56:58.604949 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1083001, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8890033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:58.604960 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082943, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.870003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:58.604972 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082943, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.870003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:58.604983 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082943, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.870003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:58.604996 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1083001, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8890033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:58.605009 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1082967, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8779054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:56:58.605046 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1082940, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8690028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:00.337467 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1082940, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8690028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:00.337562 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1082976, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.880223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:00.337572 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1082964, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.877237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:00.337579 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1082968, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8780031, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-06 03:57:00.337584 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1082940, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1775440454.8690028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:00.337640 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1082967, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8779054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:00.337657 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1082976, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.880223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:00.337676 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1082967, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8779054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:00.337682 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082943, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.870003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:00.337687 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082943, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.870003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:00.337692 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1082964, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.877237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:00.337698 | orchestrator | skipping: 
[testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1082967, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8779054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:00.337708 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1082995, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8858438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:00.337714 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:57:00.337724 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1082940, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8690028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:00.337733 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1082940, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8690028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:07.369342 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1082964, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.877237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:07.369429 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1082964, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.877237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:07.369437 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 
'inode': 1082967, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8779054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:07.369443 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1082995, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8858438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:07.369465 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:57:07.369473 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1082967, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8779054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:07.369490 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1082995, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1775440454.8858438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:07.369496 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1082963, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8766356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-06 03:57:07.369514 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:57:07.369520 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1082995, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8858438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:07.369526 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:57:07.369532 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1082964, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1775440454.877237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:07.369538 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1082964, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.877237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:07.369548 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1082995, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8858438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:07.369554 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:57:07.369560 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1082995, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8858438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-06 03:57:07.369569 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:57:07.369575 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1082955, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8750029, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-06 03:57:07.369617 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082978, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.880223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-06 03:57:35.994787 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082938, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8679051, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-06 03:57:35.994911 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1083001, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8890033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-06 03:57:35.994928 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1082976, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.880223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-06 03:57:35.994968 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1082943, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.870003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-06 03:57:35.994983 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1082940, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8690028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-06 03:57:35.995012 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1082967, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8779054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-06 03:57:35.995026 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1082964, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.877237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-06 03:57:35.995059 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1082995, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8858438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-06 03:57:35.995073 | orchestrator | 2026-04-06 03:57:35.995088 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-04-06 03:57:35.995102 | orchestrator | Monday 06 April 2026 03:57:14 +0000 (0:00:28.911) 0:00:55.951 ********** 2026-04-06 03:57:35.995115 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-06 03:57:35.995128 | orchestrator | 2026-04-06 03:57:35.995138 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-04-06 03:57:35.995148 | orchestrator | Monday 06 April 2026 03:57:14 +0000 (0:00:00.802) 0:00:56.753 ********** 2026-04-06 03:57:35.995171 | orchestrator | [WARNING]: Skipped 2026-04-06 03:57:35.995183 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 03:57:35.995196 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-04-06 03:57:35.995208 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 03:57:35.995222 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-04-06 03:57:35.995234 | orchestrator | [WARNING]: Skipped 2026-04-06 03:57:35.995244 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 03:57:35.995256 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-04-06 03:57:35.995266 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 03:57:35.995277 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-04-06 
03:57:35.995288 | orchestrator | [WARNING]: Skipped 2026-04-06 03:57:35.995299 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 03:57:35.995310 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-04-06 03:57:35.995321 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 03:57:35.995333 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-04-06 03:57:35.995344 | orchestrator | [WARNING]: Skipped 2026-04-06 03:57:35.995356 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 03:57:35.995367 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-04-06 03:57:35.995379 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 03:57:35.995390 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-04-06 03:57:35.995402 | orchestrator | [WARNING]: Skipped 2026-04-06 03:57:35.995413 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 03:57:35.995425 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-04-06 03:57:35.995437 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 03:57:35.995448 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-04-06 03:57:35.995459 | orchestrator | [WARNING]: Skipped 2026-04-06 03:57:35.995471 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 03:57:35.995485 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-04-06 03:57:35.995500 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 03:57:35.995515 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-04-06 03:57:35.995530 | orchestrator | [WARNING]: Skipped 
2026-04-06 03:57:35.995551 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 03:57:35.995565 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-04-06 03:57:35.995578 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 03:57:35.995655 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-04-06 03:57:35.995668 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-06 03:57:35.995680 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-06 03:57:35.995691 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-06 03:57:35.995702 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-06 03:57:35.995713 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-06 03:57:35.995724 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-06 03:57:35.995735 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-06 03:57:35.995747 | orchestrator | 2026-04-06 03:57:35.995758 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-04-06 03:57:35.995770 | orchestrator | Monday 06 April 2026 03:57:17 +0000 (0:00:02.044) 0:00:58.798 ********** 2026-04-06 03:57:35.995781 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-06 03:57:35.995808 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:57:35.995819 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-06 03:57:35.995832 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:57:35.995844 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-06 03:57:35.995856 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:57:35.995878 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-06 03:57:54.633165 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:57:54.633276 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-06 03:57:54.633292 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:57:54.633305 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-06 03:57:54.633316 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:57:54.633328 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-04-06 03:57:54.633339 | orchestrator | 2026-04-06 03:57:54.633351 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-04-06 03:57:54.633363 | orchestrator | Monday 06 April 2026 03:57:35 +0000 (0:00:18.966) 0:01:17.764 ********** 2026-04-06 03:57:54.633374 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-06 03:57:54.633385 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-06 03:57:54.633396 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:57:54.633407 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:57:54.633418 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-06 03:57:54.633429 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:57:54.633441 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-06 03:57:54.633451 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:57:54.633463 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-06 03:57:54.633474 | orchestrator | 
skipping: [testbed-node-4] 2026-04-06 03:57:54.633484 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-06 03:57:54.633495 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:57:54.633506 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-04-06 03:57:54.633517 | orchestrator | 2026-04-06 03:57:54.633528 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-04-06 03:57:54.633539 | orchestrator | Monday 06 April 2026 03:57:39 +0000 (0:00:03.028) 0:01:20.792 ********** 2026-04-06 03:57:54.633551 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-06 03:57:54.633563 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:57:54.633574 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-06 03:57:54.633663 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:57:54.633676 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-06 03:57:54.633688 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:57:54.633699 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-06 03:57:54.633710 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:57:54.633749 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-04-06 03:57:54.633761 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  
2026-04-06 03:57:54.633787 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-06 03:57:54.633798 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:57:54.633809 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:57:54.633820 | orchestrator | 2026-04-06 03:57:54.633831 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-04-06 03:57:54.633842 | orchestrator | Monday 06 April 2026 03:57:40 +0000 (0:00:01.988) 0:01:22.781 ********** 2026-04-06 03:57:54.633854 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-06 03:57:54.633865 | orchestrator | 2026-04-06 03:57:54.633876 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-04-06 03:57:54.633887 | orchestrator | Monday 06 April 2026 03:57:41 +0000 (0:00:00.803) 0:01:23.585 ********** 2026-04-06 03:57:54.633898 | orchestrator | skipping: [testbed-manager] 2026-04-06 03:57:54.633909 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:57:54.633920 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:57:54.633931 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:57:54.633942 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:57:54.633953 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:57:54.633963 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:57:54.633974 | orchestrator | 2026-04-06 03:57:54.633985 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-04-06 03:57:54.633996 | orchestrator | Monday 06 April 2026 03:57:42 +0000 (0:00:00.824) 0:01:24.410 ********** 2026-04-06 03:57:54.634007 | orchestrator | skipping: [testbed-manager] 2026-04-06 03:57:54.634072 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:57:54.634086 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:57:54.634097 | 
orchestrator | skipping: [testbed-node-5] 2026-04-06 03:57:54.634108 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:57:54.634119 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:57:54.634130 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:57:54.634141 | orchestrator | 2026-04-06 03:57:54.634152 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-04-06 03:57:54.634184 | orchestrator | Monday 06 April 2026 03:57:45 +0000 (0:00:02.652) 0:01:27.063 ********** 2026-04-06 03:57:54.634196 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-06 03:57:54.634207 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-06 03:57:54.634218 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:57:54.634229 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-06 03:57:54.634239 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-06 03:57:54.634250 | orchestrator | skipping: [testbed-manager] 2026-04-06 03:57:54.634261 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:57:54.634272 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:57:54.634283 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-06 03:57:54.634294 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:57:54.634305 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-06 03:57:54.634316 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:57:54.634327 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-06 03:57:54.634338 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:57:54.634350 | 
orchestrator | 2026-04-06 03:57:54.634361 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-04-06 03:57:54.634382 | orchestrator | Monday 06 April 2026 03:57:47 +0000 (0:00:01.761) 0:01:28.824 ********** 2026-04-06 03:57:54.634394 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-06 03:57:54.634405 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:57:54.634416 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-06 03:57:54.634427 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:57:54.634438 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-06 03:57:54.634449 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:57:54.634460 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-06 03:57:54.634471 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:57:54.634482 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-06 03:57:54.634493 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:57:54.634504 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-04-06 03:57:54.634516 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-06 03:57:54.634527 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:57:54.634538 | orchestrator | 2026-04-06 03:57:54.634552 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-04-06 03:57:54.634570 | orchestrator | Monday 06 April 2026 
03:57:48 +0000 (0:00:01.575) 0:01:30.400 ********** 2026-04-06 03:57:54.634613 | orchestrator | [WARNING]: Skipped 2026-04-06 03:57:54.634633 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-04-06 03:57:54.634652 | orchestrator | due to this access issue: 2026-04-06 03:57:54.634672 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-04-06 03:57:54.634690 | orchestrator | not a directory 2026-04-06 03:57:54.634709 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-06 03:57:54.634721 | orchestrator | 2026-04-06 03:57:54.634732 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-04-06 03:57:54.634750 | orchestrator | Monday 06 April 2026 03:57:49 +0000 (0:00:01.222) 0:01:31.622 ********** 2026-04-06 03:57:54.634762 | orchestrator | skipping: [testbed-manager] 2026-04-06 03:57:54.634773 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:57:54.634784 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:57:54.634794 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:57:54.634805 | orchestrator | skipping: [testbed-node-3] 2026-04-06 03:57:54.634816 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:57:54.634827 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:57:54.634838 | orchestrator | 2026-04-06 03:57:54.634849 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-04-06 03:57:54.634860 | orchestrator | Monday 06 April 2026 03:57:50 +0000 (0:00:01.061) 0:01:32.684 ********** 2026-04-06 03:57:54.634871 | orchestrator | skipping: [testbed-manager] 2026-04-06 03:57:54.634882 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:57:54.634893 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:57:54.634904 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:57:54.634914 | orchestrator | skipping: [testbed-node-3] 
2026-04-06 03:57:54.634925 | orchestrator | skipping: [testbed-node-4] 2026-04-06 03:57:54.634935 | orchestrator | skipping: [testbed-node-5] 2026-04-06 03:57:54.634946 | orchestrator | 2026-04-06 03:57:54.634957 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-04-06 03:57:54.634968 | orchestrator | Monday 06 April 2026 03:57:51 +0000 (0:00:01.063) 0:01:33.747 ********** 2026-04-06 03:57:54.634992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 03:57:56.218423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 03:57:56.218500 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-06 03:57:56.218506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 03:57:56.218511 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 03:57:56.218526 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 03:57:56.218532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 03:57:56.218552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 03:57:56.218567 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 03:57:56.218572 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 03:57:56.218576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 03:57:56.218611 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 03:57:56.218617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 03:57:56.218626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 03:57:56.218631 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 03:57:56.218643 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 03:57:58.437195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 03:57:58.437337 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-06 03:57:58.437363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 03:57:58.437384 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-06 03:57:58.437405 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-06 03:57:58.437446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 03:57:58.437531 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-06 03:57:58.437556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 03:57:58.437576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 03:57:58.437678 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-04-06 03:57:58.437715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 03:57:58.437748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 03:57:58.437784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 03:57:58.437804 | orchestrator | 2026-04-06 03:57:58.437826 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-04-06 03:57:58.437848 | orchestrator | Monday 06 April 2026 03:57:56 +0000 (0:00:04.247) 0:01:37.995 ********** 2026-04-06 
03:57:58.437867 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-06 03:57:58.437887 | orchestrator | skipping: [testbed-manager] 2026-04-06 03:57:58.437906 | orchestrator | 2026-04-06 03:57:58.437926 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-06 03:57:58.437945 | orchestrator | Monday 06 April 2026 03:57:57 +0000 (0:00:01.413) 0:01:39.408 ********** 2026-04-06 03:57:58.437964 | orchestrator | 2026-04-06 03:57:58.437982 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-06 03:57:58.438001 | orchestrator | Monday 06 April 2026 03:57:57 +0000 (0:00:00.294) 0:01:39.703 ********** 2026-04-06 03:57:58.438098 | orchestrator | 2026-04-06 03:57:58.438123 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-06 03:57:58.438144 | orchestrator | Monday 06 April 2026 03:57:57 +0000 (0:00:00.074) 0:01:39.777 ********** 2026-04-06 03:57:58.438163 | orchestrator | 2026-04-06 03:57:58.438184 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-06 03:57:58.438219 | orchestrator | Monday 06 April 2026 03:57:58 +0000 (0:00:00.090) 0:01:39.868 ********** 2026-04-06 03:59:32.683497 | orchestrator | 2026-04-06 03:59:32.683671 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-06 03:59:32.683684 | orchestrator | Monday 06 April 2026 03:57:58 +0000 (0:00:00.075) 0:01:39.944 ********** 2026-04-06 03:59:32.683688 | orchestrator | 2026-04-06 03:59:32.683693 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-06 03:59:32.683697 | orchestrator | Monday 06 April 2026 03:57:58 +0000 (0:00:00.070) 0:01:40.014 ********** 2026-04-06 03:59:32.683701 | orchestrator | 2026-04-06 03:59:32.683705 | orchestrator | TASK [prometheus : Flush handlers] 
********************************************* 2026-04-06 03:59:32.683709 | orchestrator | Monday 06 April 2026 03:57:58 +0000 (0:00:00.073) 0:01:40.088 ********** 2026-04-06 03:59:32.683713 | orchestrator | 2026-04-06 03:59:32.683716 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-04-06 03:59:32.683720 | orchestrator | Monday 06 April 2026 03:57:58 +0000 (0:00:00.104) 0:01:40.192 ********** 2026-04-06 03:59:32.683724 | orchestrator | changed: [testbed-manager] 2026-04-06 03:59:32.683729 | orchestrator | 2026-04-06 03:59:32.683733 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-04-06 03:59:32.683737 | orchestrator | Monday 06 April 2026 03:58:21 +0000 (0:00:22.869) 0:02:03.061 ********** 2026-04-06 03:59:32.683741 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:59:32.683745 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:59:32.683803 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:59:32.683807 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:59:32.683811 | orchestrator | changed: [testbed-manager] 2026-04-06 03:59:32.683817 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:59:32.683823 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:59:32.683829 | orchestrator | 2026-04-06 03:59:32.683835 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-04-06 03:59:32.683842 | orchestrator | Monday 06 April 2026 03:58:34 +0000 (0:00:13.102) 0:02:16.163 ********** 2026-04-06 03:59:32.683848 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:59:32.683854 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:59:32.683883 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:59:32.683890 | orchestrator | 2026-04-06 03:59:32.683896 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-04-06 03:59:32.683903 | 
orchestrator | Monday 06 April 2026 03:58:40 +0000 (0:00:06.032) 0:02:22.196 ********** 2026-04-06 03:59:32.683909 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:59:32.683913 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:59:32.683917 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:59:32.683921 | orchestrator | 2026-04-06 03:59:32.683925 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-04-06 03:59:32.683928 | orchestrator | Monday 06 April 2026 03:58:46 +0000 (0:00:05.975) 0:02:28.172 ********** 2026-04-06 03:59:32.683932 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:59:32.683936 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:59:32.683940 | orchestrator | changed: [testbed-manager] 2026-04-06 03:59:32.683943 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:59:32.683947 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:59:32.683951 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:59:32.683955 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:59:32.683958 | orchestrator | 2026-04-06 03:59:32.683962 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-04-06 03:59:32.683966 | orchestrator | Monday 06 April 2026 03:58:55 +0000 (0:00:09.472) 0:02:37.645 ********** 2026-04-06 03:59:32.683970 | orchestrator | changed: [testbed-manager] 2026-04-06 03:59:32.683974 | orchestrator | 2026-04-06 03:59:32.683978 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-04-06 03:59:32.683981 | orchestrator | Monday 06 April 2026 03:59:05 +0000 (0:00:09.235) 0:02:46.880 ********** 2026-04-06 03:59:32.683985 | orchestrator | changed: [testbed-node-1] 2026-04-06 03:59:32.684001 | orchestrator | changed: [testbed-node-0] 2026-04-06 03:59:32.684005 | orchestrator | changed: [testbed-node-2] 2026-04-06 03:59:32.684008 | orchestrator | 2026-04-06 
03:59:32.684012 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-04-06 03:59:32.684016 | orchestrator | Monday 06 April 2026 03:59:15 +0000 (0:00:10.801) 0:02:57.681 ********** 2026-04-06 03:59:32.684020 | orchestrator | changed: [testbed-manager] 2026-04-06 03:59:32.684024 | orchestrator | 2026-04-06 03:59:32.684027 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-04-06 03:59:32.684031 | orchestrator | Monday 06 April 2026 03:59:26 +0000 (0:00:10.795) 0:03:08.476 ********** 2026-04-06 03:59:32.684035 | orchestrator | changed: [testbed-node-3] 2026-04-06 03:59:32.684039 | orchestrator | changed: [testbed-node-4] 2026-04-06 03:59:32.684042 | orchestrator | changed: [testbed-node-5] 2026-04-06 03:59:32.684046 | orchestrator | 2026-04-06 03:59:32.684050 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 03:59:32.684058 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-06 03:59:32.684065 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-06 03:59:32.684072 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-06 03:59:32.684078 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-06 03:59:32.684084 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-06 03:59:32.684108 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-06 03:59:32.684122 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-06 03:59:32.684129 | orchestrator | 2026-04-06 03:59:32.684136 | orchestrator | 
2026-04-06 03:59:32.684142 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 03:59:32.684148 | orchestrator | Monday 06 April 2026 03:59:32 +0000 (0:00:05.359) 0:03:13.836 ********** 2026-04-06 03:59:32.684155 | orchestrator | =============================================================================== 2026-04-06 03:59:32.684161 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 28.91s 2026-04-06 03:59:32.684167 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 22.87s 2026-04-06 03:59:32.684173 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 18.97s 2026-04-06 03:59:32.684179 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.10s 2026-04-06 03:59:32.684185 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.80s 2026-04-06 03:59:32.684192 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.80s 2026-04-06 03:59:32.684198 | orchestrator | prometheus : Restart prometheus-cadvisor container ---------------------- 9.47s 2026-04-06 03:59:32.684204 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 9.24s 2026-04-06 03:59:32.684210 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.81s 2026-04-06 03:59:32.684216 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 6.03s 2026-04-06 03:59:32.684222 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.98s 2026-04-06 03:59:32.684229 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.95s 2026-04-06 03:59:32.684234 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 5.36s 2026-04-06 
03:59:32.684238 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.25s 2026-04-06 03:59:32.684242 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.03s 2026-04-06 03:59:32.684245 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.90s 2026-04-06 03:59:32.684249 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.66s 2026-04-06 03:59:32.684253 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.65s 2026-04-06 03:59:32.684256 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.05s 2026-04-06 03:59:32.684260 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.99s 2026-04-06 03:59:35.388285 | orchestrator | 2026-04-06 03:59:35 | INFO  | Task 7aa87692-d29e-47d7-8b49-6d2f96fb8339 (grafana) was prepared for execution. 2026-04-06 03:59:35.388367 | orchestrator | 2026-04-06 03:59:35 | INFO  | It takes a moment until task 7aa87692-d29e-47d7-8b49-6d2f96fb8339 (grafana) has been started and output is visible here. 
2026-04-06 03:59:46.263795 | orchestrator |
2026-04-06 03:59:46.263926 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 03:59:46.263946 | orchestrator |
2026-04-06 03:59:46.263956 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 03:59:46.263978 | orchestrator | Monday 06 April 2026 03:59:40 +0000 (0:00:00.344) 0:00:00.344 **********
2026-04-06 03:59:46.263986 | orchestrator | ok: [testbed-node-0]
2026-04-06 03:59:46.263995 | orchestrator | ok: [testbed-node-1]
2026-04-06 03:59:46.264003 | orchestrator | ok: [testbed-node-2]
2026-04-06 03:59:46.264011 | orchestrator |
2026-04-06 03:59:46.264018 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 03:59:46.264026 | orchestrator | Monday 06 April 2026 03:59:40 +0000 (0:00:00.365) 0:00:00.710 **********
2026-04-06 03:59:46.264033 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-04-06 03:59:46.264041 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-04-06 03:59:46.264069 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-04-06 03:59:46.264077 | orchestrator |
2026-04-06 03:59:46.264084 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-04-06 03:59:46.264092 | orchestrator |
2026-04-06 03:59:46.264099 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-04-06 03:59:46.264106 | orchestrator | Monday 06 April 2026 03:59:41 +0000 (0:00:00.525) 0:00:01.235 **********
2026-04-06 03:59:46.264114 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 03:59:46.264123 | orchestrator |
2026-04-06 03:59:46.264130 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-04-06 03:59:46.264137 | orchestrator | Monday 06 April 2026 03:59:41 +0000 (0:00:00.640) 0:00:01.875 ********** 2026-04-06 03:59:46.264147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-06 03:59:46.264159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-06 03:59:46.264167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-06 03:59:46.264174 | orchestrator |
2026-04-06 03:59:46.264182 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-04-06 03:59:46.264189 | orchestrator | Monday 06 April 2026 03:59:42 +0000 (0:00:00.923) 0:00:02.799 **********
2026-04-06 03:59:46.264206 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2026-04-06 03:59:46.264214 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2026-04-06 03:59:46.264222 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 03:59:46.264229 | orchestrator |
2026-04-06 03:59:46.264237 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-04-06 03:59:46.264244 | orchestrator | Monday 06 April 2026 03:59:43 +0000 (0:00:00.922) 0:00:03.722 **********
2026-04-06 03:59:46.264252 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 03:59:46.264259 | orchestrator |
2026-04-06 03:59:46.264266 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-04-06 03:59:46.264280 | orchestrator | Monday 06 April 2026 03:59:44 +0000 (0:00:00.629) 0:00:04.351 **********
2026-04-06 03:59:46.264309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'],
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-06 03:59:46.264319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-06 03:59:46.264328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-06 03:59:46.264336 | orchestrator | 2026-04-06 03:59:46.264345 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-06 03:59:46.264354 | orchestrator | Monday 06 April 2026 03:59:45 +0000 
(0:00:01.343) 0:00:05.695 ********** 2026-04-06 03:59:46.264362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-06 03:59:46.264371 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:59:46.264380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-06 03:59:46.264390 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:59:46.264433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-06 03:59:53.430156 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:59:53.430265 | orchestrator | 2026-04-06 03:59:53.430278 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-06 03:59:53.430289 | orchestrator | Monday 06 April 2026 03:59:46 +0000 (0:00:00.678) 0:00:06.374 ********** 2026-04-06 03:59:53.430301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-06 03:59:53.430314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-06 03:59:53.430324 | orchestrator | skipping: [testbed-node-0] 2026-04-06 03:59:53.430333 | orchestrator | skipping: [testbed-node-1] 2026-04-06 03:59:53.430343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-06 03:59:53.430352 | orchestrator | skipping: [testbed-node-2] 2026-04-06 03:59:53.430361 | orchestrator | 2026-04-06 03:59:53.430369 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-06 03:59:53.430378 | orchestrator | Monday 06 April 2026 03:59:46 +0000 (0:00:00.678) 0:00:07.052 ********** 2026-04-06 03:59:53.430387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-06 03:59:53.430418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-06 03:59:53.430458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-06 03:59:53.430469 | orchestrator | 2026-04-06 03:59:53.430479 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-04-06 03:59:53.430487 | orchestrator | Monday 06 April 2026 03:59:48 +0000 (0:00:01.327) 0:00:08.380 ********** 2026-04-06 03:59:53.430497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-06 03:59:53.430506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-06 03:59:53.430515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 
2026-04-06 03:59:53.430524 | orchestrator |
2026-04-06 03:59:53.430540 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-04-06 03:59:53.430549 | orchestrator | Monday 06 April 2026 03:59:49 +0000 (0:00:01.708) 0:00:10.089 **********
2026-04-06 03:59:53.430558 | orchestrator | skipping: [testbed-node-0]
2026-04-06 03:59:53.430567 | orchestrator | skipping: [testbed-node-1]
2026-04-06 03:59:53.430606 | orchestrator | skipping: [testbed-node-2]
2026-04-06 03:59:53.430615 | orchestrator |
2026-04-06 03:59:53.430624 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-04-06 03:59:53.430633 | orchestrator | Monday 06 April 2026 03:59:50 +0000 (0:00:00.355) 0:00:10.445 **********
2026-04-06 03:59:53.430641 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-06 03:59:53.430653 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-06 03:59:53.430663 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-06 03:59:53.430672 | orchestrator |
2026-04-06 03:59:53.430683 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-04-06 03:59:53.430692 | orchestrator | Monday 06 April 2026 03:59:51 +0000 (0:00:01.276) 0:00:11.721 **********
2026-04-06 03:59:53.430702 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-06 03:59:53.430713 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-06 03:59:53.430723 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-06 03:59:53.430733 | orchestrator |
2026-04-06
03:59:53.430749 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-04-06 03:59:53.430766 | orchestrator | Monday 06 April 2026 03:59:53 +0000 (0:00:01.813) 0:00:13.534 **********
2026-04-06 04:00:00.263761 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 04:00:00.263884 | orchestrator |
2026-04-06 04:00:00.263908 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-04-06 04:00:00.263927 | orchestrator | Monday 06 April 2026 03:59:54 +0000 (0:00:00.785) 0:00:14.319 **********
2026-04-06 04:00:00.263945 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-04-06 04:00:00.263964 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-04-06 04:00:00.263982 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:00:00.264001 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:00:00.264019 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:00:00.264037 | orchestrator |
2026-04-06 04:00:00.264056 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-04-06 04:00:00.264073 | orchestrator | Monday 06 April 2026 03:59:54 +0000 (0:00:00.758) 0:00:15.078 **********
2026-04-06 04:00:00.264091 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:00:00.264109 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:00:00.264126 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:00:00.264144 | orchestrator |
2026-04-06 04:00:00.264162 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-04-06 04:00:00.264180 | orchestrator | Monday 06 April 2026 03:59:55 +0000 (0:00:00.406) 0:00:15.484 **********
2026-04-06 04:00:00.264201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path':
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1082743, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.809002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:00.264256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1082743, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.809002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:00.264276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1082743, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.809002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:00.264295 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1082801, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8260603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:00.264356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1082801, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8260603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:00.264378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1082801, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8260603, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:00.264394 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1082753, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8128443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:00.264441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1082753, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8128443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:00.264460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1082753, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8128443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:00.264478 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1082803, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8290021, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:00.264496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1082803, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8290021, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:00.264536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1082803, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8290021, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-04-06 04:00:04.006974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1082773, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8191369, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:04.007070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1082773, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8191369, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:04.007107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1082773, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8191369, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:04.007118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1082788, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.823002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:04.007129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1082788, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.823002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:04.007151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1082788, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.823002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:04.007178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1082741, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8064783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:04.007188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1082741, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8064783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:04.007204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1082741, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8064783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:04.007213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1082747, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8100019, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:04.007221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1082747, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8100019, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:04.007241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1082747, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8100019, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:04.007257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1082756, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8137112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:07.989932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1082756, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8137112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:07.990158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1082756, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8137112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:07.990181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1082781, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.821002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:07.990195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1082781, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.821002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:07.990207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1082781, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.821002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:07.990235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1082791, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8250022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:07.990271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1082791, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8250022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:07.990294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1082791, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8250022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:07.990306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1082749, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8122613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:07.990317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1082749, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8122613, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:07.990329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1082749, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8122613, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:07.990348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1082787, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8225167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:07.990368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1082787, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8225167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:12.108300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1082787, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8225167, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:12.108390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1082775, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8208385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:12.108404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1082775, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8208385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:12.108414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1082775, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1775440454.8208385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:12.108424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1082770, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8191369, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:12.108447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1082770, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8191369, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:12.108489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 
1082770, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8191369, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:12.108501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1082764, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8183389, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:12.108510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1082764, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8183389, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:12.108519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 27218, 'inode': 1082764, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.8183389, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:12.108529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1082783, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.822002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:12.108544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1082783, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.822002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:12.108651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1082783, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.822002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:16.230888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1082759, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.815002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:16.230987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1082759, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.815002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:16.230994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1082759, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.815002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:16.231000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1082790, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.823002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:16.231023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1082790, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.823002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:16.231042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1082790, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.823002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:16.231058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1082928, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.866225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:16.231063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1082928, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.866225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:16.231068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1082928, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.866225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:16.231098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1082843, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.844057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:16.231107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1082843, 'dev': 119, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1775440454.844057, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-06 04:00:16.231128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': 
2026-04-06 04:00:16.231140 - 04:02:10.671363 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => (items: regular files under /operations/grafana/dashboards/infrastructure/, owner root:root, mode 0644: haproxy.json (410814 bytes), database.json (30898), node-rsrc-use.json (15725), alertmanager-overview.json (9645), opensearch.json (65458), node_exporter_full.json (682774), prometheus-remote-write.json (22317), redfish.json (38087), nodes.json (21109), memcached.json (24243), fluentd.json (82960), libvirt.json (29672), elasticsearch.json (187864), node-cluster-rsrc-use.json (16098), rabbitmq.json (222049), prometheus_alertmanager.json (115472), blackbox.json (31128), cadvisor.json (53882), node_exporter_side_by_side.json (70691), prometheus.json (21898))
2026-04-06 04:02:10.671373 | orchestrator |
2026-04-06 04:02:10.671383 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-04-06 04:02:10.671392 | orchestrator | Monday 06 April 2026 04:00:33 +0000 (0:00:38.021) 0:00:53.506 **********
2026-04-06 04:02:10.671400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-06 04:02:10.671447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', ... same value as testbed-node-0 above}) 2026-04-06 04:02:10.671456 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-06 04:02:10.671465 | orchestrator | 2026-04-06 04:02:10.671473 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-04-06 04:02:10.671481 | orchestrator | Monday 06 April 2026 04:00:34 +0000 (0:00:01.069) 0:00:54.576 ********** 2026-04-06 04:02:10.671489 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:02:10.671499 | orchestrator | 2026-04-06 04:02:10.671507 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-04-06 04:02:10.671515 | orchestrator | Monday 06 April 2026 04:00:36 +0000 (0:00:02.452) 0:00:57.028 ********** 2026-04-06 04:02:10.671523 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:02:10.671531 | orchestrator | 2026-04-06 04:02:10.671539 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-06 04:02:10.671551 | orchestrator | Monday 06 April 2026 04:00:39 +0000 (0:00:02.370) 0:00:59.399 ********** 2026-04-06 04:02:10.671559 | orchestrator | 2026-04-06 04:02:10.671638 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-06 04:02:10.671649 | orchestrator | Monday 06 April 2026 04:00:39 +0000 (0:00:00.092) 0:00:59.491 ********** 2026-04-06 04:02:10.671657 | orchestrator | 
2026-04-06 04:02:10.671665 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-06 04:02:10.671672 | orchestrator | Monday 06 April 2026 04:00:39 +0000 (0:00:00.078) 0:00:59.570 ********** 2026-04-06 04:02:10.671680 | orchestrator | 2026-04-06 04:02:10.671688 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-04-06 04:02:10.671696 | orchestrator | Monday 06 April 2026 04:00:39 +0000 (0:00:00.074) 0:00:59.644 ********** 2026-04-06 04:02:10.671704 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:02:10.671712 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:02:10.671720 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:02:10.671728 | orchestrator | 2026-04-06 04:02:10.671736 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-04-06 04:02:10.671744 | orchestrator | Monday 06 April 2026 04:00:41 +0000 (0:00:02.347) 0:01:01.991 ********** 2026-04-06 04:02:10.671754 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:02:10.671764 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:02:10.671774 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-04-06 04:02:10.671785 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-04-06 04:02:10.671794 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-04-06 04:02:10.671812 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 
2026-04-06 04:02:10.671821 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:02:10.671831 | orchestrator | 2026-04-06 04:02:10.671841 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-04-06 04:02:10.671850 | orchestrator | Monday 06 April 2026 04:01:33 +0000 (0:00:51.202) 0:01:53.194 ********** 2026-04-06 04:02:10.671860 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:02:10.671870 | orchestrator | changed: [testbed-node-1] 2026-04-06 04:02:10.671878 | orchestrator | changed: [testbed-node-2] 2026-04-06 04:02:10.671886 | orchestrator | 2026-04-06 04:02:10.671893 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-04-06 04:02:10.671901 | orchestrator | Monday 06 April 2026 04:02:05 +0000 (0:00:32.248) 0:02:25.443 ********** 2026-04-06 04:02:10.671909 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:02:10.671917 | orchestrator | 2026-04-06 04:02:10.671925 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-04-06 04:02:10.671933 | orchestrator | Monday 06 April 2026 04:02:07 +0000 (0:00:02.192) 0:02:27.635 ********** 2026-04-06 04:02:10.671940 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:02:10.671948 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:02:10.671956 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:02:10.671964 | orchestrator | 2026-04-06 04:02:10.671972 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-04-06 04:02:10.671980 | orchestrator | Monday 06 April 2026 04:02:07 +0000 (0:00:00.359) 0:02:27.995 ********** 2026-04-06 04:02:10.671989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-04-06 04:02:10.672007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-04-06 04:02:11.387703 | orchestrator | 2026-04-06 04:02:11.387776 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-04-06 04:02:11.387783 | orchestrator | Monday 06 April 2026 04:02:10 +0000 (0:00:02.782) 0:02:30.777 ********** 2026-04-06 04:02:11.387788 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:02:11.387794 | orchestrator | 2026-04-06 04:02:11.387798 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 04:02:11.387804 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-06 04:02:11.387809 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-06 04:02:11.387813 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-06 04:02:11.387817 | orchestrator | 2026-04-06 04:02:11.387821 | orchestrator | 2026-04-06 04:02:11.387825 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 04:02:11.387829 | orchestrator | Monday 06 April 2026 04:02:10 +0000 (0:00:00.301) 0:02:31.079 ********** 2026-04-06 04:02:11.387833 | orchestrator | =============================================================================== 2026-04-06 04:02:11.387837 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 51.20s 2026-04-06 04:02:11.387855 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 38.02s 2026-04-06 04:02:11.387859 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 32.25s 2026-04-06 04:02:11.387876 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.78s 2026-04-06 04:02:11.387880 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.45s 2026-04-06 04:02:11.387884 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.37s 2026-04-06 04:02:11.387888 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.35s 2026-04-06 04:02:11.387892 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.19s 2026-04-06 04:02:11.387896 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.81s 2026-04-06 04:02:11.387899 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.71s 2026-04-06 04:02:11.387903 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.34s 2026-04-06 04:02:11.387907 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.33s 2026-04-06 04:02:11.387911 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.28s 2026-04-06 04:02:11.387915 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.07s 2026-04-06 04:02:11.387918 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.92s 2026-04-06 04:02:11.387922 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.92s 2026-04-06 04:02:11.387926 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.79s 2026-04-06 04:02:11.387930 | orchestrator | grafana : Find templated grafana dashboards 
----------------------------- 0.76s
2026-04-06 04:02:11.387934 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.68s
2026-04-06 04:02:11.387938 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.68s
2026-04-06 04:02:11.797727 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh
2026-04-06 04:02:11.802432 | orchestrator | + set -e
2026-04-06 04:02:11.802509 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-06 04:02:11.803122 | orchestrator | ++ export INTERACTIVE=false
2026-04-06 04:02:11.803155 | orchestrator | ++ INTERACTIVE=false
2026-04-06 04:02:11.803165 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-06 04:02:11.803403 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-06 04:02:11.803492 | orchestrator | + source /opt/manager-vars.sh
2026-04-06 04:02:11.804292 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-06 04:02:11.804334 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-06 04:02:11.804344 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-06 04:02:11.804352 | orchestrator | ++ CEPH_VERSION=reef
2026-04-06 04:02:11.804361 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-06 04:02:11.804860 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-06 04:02:11.804905 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-06 04:02:11.804916 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-06 04:02:11.804925 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-06 04:02:11.804936 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-06 04:02:11.804945 | orchestrator | ++ export ARA=false
2026-04-06 04:02:11.804954 | orchestrator | ++ ARA=false
2026-04-06 04:02:11.804963 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-06 04:02:11.804972 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-06 04:02:11.804981 | orchestrator | ++ export TEMPEST=false
2026-04-06 04:02:11.804990 | orchestrator | ++ TEMPEST=false
2026-04-06 04:02:11.804998 | orchestrator | ++ export IS_ZUUL=true
2026-04-06 04:02:11.805007 | orchestrator | ++ IS_ZUUL=true
2026-04-06 04:02:11.805016 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-04-06 04:02:11.805025 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-04-06 04:02:11.805034 | orchestrator | ++ export EXTERNAL_API=false
2026-04-06 04:02:11.805043 | orchestrator | ++ EXTERNAL_API=false
2026-04-06 04:02:11.805051 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-06 04:02:11.805060 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-06 04:02:11.805069 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-06 04:02:11.805077 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-06 04:02:11.805086 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-06 04:02:11.805095 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-06 04:02:11.805647 | orchestrator | ++ semver 9.5.0 8.0.0
2026-04-06 04:02:11.866225 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-06 04:02:11.866333 | orchestrator | + osism apply clusterapi
2026-04-06 04:02:14.198764 | orchestrator | 2026-04-06 04:02:14 | INFO  | Task 23114130-9325-49f2-a0c8-8aa658621aaa (clusterapi) was prepared for execution.
2026-04-06 04:02:14.199732 | orchestrator | 2026-04-06 04:02:14 | INFO  | It takes a moment until task 23114130-9325-49f2-a0c8-8aa658621aaa (clusterapi) has been started and output is visible here.
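The trace above shows `510-clusterapi.sh` gating on a version comparison before running `osism apply clusterapi`: `semver 9.5.0 8.0.0` prints `1`, which satisfies `[[ 1 -ge 0 ]]`. The `semver` helper itself (sourced from `include.sh`) is not visible in the log; a minimal sketch of a comparison with the same output convention, here under the hypothetical name `semver_cmp`, could look like this:

```shell
# semver_cmp A B -> prints 1 if A > B, 0 if A == B, -1 if A < B.
# Hypothetical stand-in for the `semver` helper sourced from include.sh;
# relies on GNU `sort -V` for version-aware ordering.
semver_cmp() {
    if [ "$1" = "$2" ]; then
        printf '0\n'
        return
    fi
    # Whichever version sorts first is the smaller one.
    first=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
    if [ "$first" = "$2" ]; then
        printf '1\n'
    else
        printf -- '-1\n'
    fi
}

semver_cmp 9.5.0 8.0.0   # prints 1, so a gate like [[ $(semver_cmp ...) -ge 0 ]] passes
```

A `-ge 0` gate of this form runs the step whenever the manager version is at least the minimum, which matches the `[[ 1 -ge 0 ]]` seen in the trace for MANAGER_VERSION=9.5.0.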
2026-04-06 04:03:11.935117 | orchestrator | 2026-04-06 04:03:11.935230 | orchestrator | PLAY [Apply cert_manager role] ************************************************* 2026-04-06 04:03:11.935251 | orchestrator | 2026-04-06 04:03:11.935267 | orchestrator | TASK [Include cert_manager role] *********************************************** 2026-04-06 04:03:11.935284 | orchestrator | Monday 06 April 2026 04:02:19 +0000 (0:00:00.239) 0:00:00.239 ********** 2026-04-06 04:03:11.935294 | orchestrator | included: cert_manager for testbed-manager 2026-04-06 04:03:11.935302 | orchestrator | 2026-04-06 04:03:11.935311 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] ********************************* 2026-04-06 04:03:11.935319 | orchestrator | Monday 06 April 2026 04:02:19 +0000 (0:00:00.290) 0:00:00.529 ********** 2026-04-06 04:03:11.935327 | orchestrator | changed: [testbed-manager] 2026-04-06 04:03:11.935336 | orchestrator | 2026-04-06 04:03:11.935345 | orchestrator | TASK [cert_manager : Deploy cert-manager] ************************************** 2026-04-06 04:03:11.935353 | orchestrator | Monday 06 April 2026 04:02:25 +0000 (0:00:05.736) 0:00:06.266 ********** 2026-04-06 04:03:11.935361 | orchestrator | changed: [testbed-manager] 2026-04-06 04:03:11.935369 | orchestrator | 2026-04-06 04:03:11.935377 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] *********************** 2026-04-06 04:03:11.935385 | orchestrator | 2026-04-06 04:03:11.935393 | orchestrator | TASK [Get capi-system namespace phase] ***************************************** 2026-04-06 04:03:11.935401 | orchestrator | Monday 06 April 2026 04:02:49 +0000 (0:00:24.027) 0:00:30.293 ********** 2026-04-06 04:03:11.935409 | orchestrator | ok: [testbed-manager] 2026-04-06 04:03:11.935417 | orchestrator | 2026-04-06 04:03:11.935425 | orchestrator | TASK [Set capi-system-phase fact] ********************************************** 2026-04-06 04:03:11.935433 | orchestrator | Monday 06 
April 2026 04:02:50 +0000 (0:00:01.159) 0:00:31.453 ********** 2026-04-06 04:03:11.935441 | orchestrator | ok: [testbed-manager] 2026-04-06 04:03:11.935449 | orchestrator | 2026-04-06 04:03:11.935473 | orchestrator | TASK [Initialize the CAPI management cluster] ********************************** 2026-04-06 04:03:11.935482 | orchestrator | Monday 06 April 2026 04:02:50 +0000 (0:00:00.191) 0:00:31.645 ********** 2026-04-06 04:03:11.935490 | orchestrator | ok: [testbed-manager] 2026-04-06 04:03:11.935498 | orchestrator | 2026-04-06 04:03:11.935506 | orchestrator | TASK [Upgrade the CAPI management cluster] ************************************* 2026-04-06 04:03:11.935515 | orchestrator | Monday 06 April 2026 04:03:08 +0000 (0:00:18.387) 0:00:50.032 ********** 2026-04-06 04:03:11.935523 | orchestrator | skipping: [testbed-manager] 2026-04-06 04:03:11.935531 | orchestrator | 2026-04-06 04:03:11.935539 | orchestrator | TASK [Install openstack-resource-controller] *********************************** 2026-04-06 04:03:11.935547 | orchestrator | Monday 06 April 2026 04:03:09 +0000 (0:00:00.178) 0:00:50.211 ********** 2026-04-06 04:03:11.935555 | orchestrator | changed: [testbed-manager] 2026-04-06 04:03:11.935563 | orchestrator | 2026-04-06 04:03:11.935590 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 04:03:11.935600 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 04:03:11.935610 | orchestrator | 2026-04-06 04:03:11.935618 | orchestrator | 2026-04-06 04:03:11.935626 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 04:03:11.935634 | orchestrator | Monday 06 April 2026 04:03:11 +0000 (0:00:02.405) 0:00:52.617 ********** 2026-04-06 04:03:11.935642 | orchestrator | =============================================================================== 2026-04-06 04:03:11.935650 | orchestrator | 
cert_manager : Deploy cert-manager ------------------------------------- 24.03s 2026-04-06 04:03:11.935658 | orchestrator | Initialize the CAPI management cluster --------------------------------- 18.39s 2026-04-06 04:03:11.935688 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.74s 2026-04-06 04:03:11.935698 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.41s 2026-04-06 04:03:11.935708 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.16s 2026-04-06 04:03:11.935717 | orchestrator | Include cert_manager role ----------------------------------------------- 0.29s 2026-04-06 04:03:11.935726 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.19s 2026-04-06 04:03:11.935736 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.18s 2026-04-06 04:03:12.360255 | orchestrator | + osism apply magnum 2026-04-06 04:03:14.755607 | orchestrator | 2026-04-06 04:03:14 | INFO  | Task be9d2bf9-31b9-4a85-9984-f44fdd684337 (magnum) was prepared for execution. 2026-04-06 04:03:14.755683 | orchestrator | 2026-04-06 04:03:14 | INFO  | It takes a moment until task be9d2bf9-31b9-4a85-9984-f44fdd684337 (magnum) has been started and output is visible here. 
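The clusterapi play above queries the `capi-system` namespace phase, stores it as a fact, and then either initializes or upgrades the CAPI management cluster (here the initialize path ran and the upgrade task was skipped). The exact task implementation is not shown in the log; the decision logic amounts to something like the following sketch, where the phase string would come from a `kubectl` jsonpath query:

```shell
# decide_capi_action PHASE -> "initialize" when PHASE is empty, else "upgrade".
# Sketch of the branch taken by the play; the phase would be obtained with
# a query such as:
#   kubectl get namespace capi-system -o jsonpath='{.status.phase}'
# (an empty result means the namespace, and thus the cluster, does not exist yet).
decide_capi_action() {
    if [ -z "$1" ]; then
        printf 'initialize\n'
    else
        printf 'upgrade\n'
    fi
}

decide_capi_action "Active"   # prints upgrade
```

In the logged run the namespace already existed, so the "Initialize" task reported `ok` (idempotent, nothing to do) and the "Upgrade" task was skipped by its own condition.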
2026-04-06 04:03:59.528706 | orchestrator | 2026-04-06 04:03:59.528837 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 04:03:59.528856 | orchestrator | 2026-04-06 04:03:59.528875 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 04:03:59.528895 | orchestrator | Monday 06 April 2026 04:03:19 +0000 (0:00:00.288) 0:00:00.288 ********** 2026-04-06 04:03:59.528914 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:03:59.528933 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:03:59.528951 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:03:59.528968 | orchestrator | 2026-04-06 04:03:59.528988 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 04:03:59.529006 | orchestrator | Monday 06 April 2026 04:03:19 +0000 (0:00:00.346) 0:00:00.635 ********** 2026-04-06 04:03:59.529026 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-04-06 04:03:59.529045 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-04-06 04:03:59.529063 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-04-06 04:03:59.529081 | orchestrator | 2026-04-06 04:03:59.529099 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-04-06 04:03:59.529111 | orchestrator | 2026-04-06 04:03:59.529122 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-06 04:03:59.529133 | orchestrator | Monday 06 April 2026 04:03:20 +0000 (0:00:00.485) 0:00:01.120 ********** 2026-04-06 04:03:59.529144 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:03:59.529156 | orchestrator | 2026-04-06 04:03:59.529167 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-04-06 
04:03:59.529178 | orchestrator | Monday 06 April 2026 04:03:20 +0000 (0:00:00.631) 0:00:01.752 ********** 2026-04-06 04:03:59.529190 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-04-06 04:03:59.529201 | orchestrator | 2026-04-06 04:03:59.529212 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-04-06 04:03:59.529223 | orchestrator | Monday 06 April 2026 04:03:24 +0000 (0:00:03.679) 0:00:05.431 ********** 2026-04-06 04:03:59.529234 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-04-06 04:03:59.529245 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-04-06 04:03:59.529256 | orchestrator | 2026-04-06 04:03:59.529267 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-04-06 04:03:59.529278 | orchestrator | Monday 06 April 2026 04:03:31 +0000 (0:00:06.876) 0:00:12.308 ********** 2026-04-06 04:03:59.529289 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-06 04:03:59.529300 | orchestrator | 2026-04-06 04:03:59.529314 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-04-06 04:03:59.529373 | orchestrator | Monday 06 April 2026 04:03:35 +0000 (0:00:03.562) 0:00:15.870 ********** 2026-04-06 04:03:59.529414 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-06 04:03:59.529433 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-04-06 04:03:59.529450 | orchestrator | 2026-04-06 04:03:59.529466 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-04-06 04:03:59.529482 | orchestrator | Monday 06 April 2026 04:03:39 +0000 (0:00:04.059) 0:00:19.929 ********** 2026-04-06 04:03:59.529498 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-04-06 04:03:59.529516 | orchestrator | 2026-04-06 04:03:59.529532 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-04-06 04:03:59.529549 | orchestrator | Monday 06 April 2026 04:03:42 +0000 (0:00:03.473) 0:00:23.403 ********** 2026-04-06 04:03:59.529566 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-04-06 04:03:59.529653 | orchestrator | 2026-04-06 04:03:59.529666 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-04-06 04:03:59.529677 | orchestrator | Monday 06 April 2026 04:03:46 +0000 (0:00:03.804) 0:00:27.208 ********** 2026-04-06 04:03:59.529689 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:03:59.529700 | orchestrator | 2026-04-06 04:03:59.529711 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-04-06 04:03:59.529722 | orchestrator | Monday 06 April 2026 04:03:49 +0000 (0:00:03.484) 0:00:30.693 ********** 2026-04-06 04:03:59.529733 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:03:59.529744 | orchestrator | 2026-04-06 04:03:59.529755 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-04-06 04:03:59.529766 | orchestrator | Monday 06 April 2026 04:03:53 +0000 (0:00:04.073) 0:00:34.767 ********** 2026-04-06 04:03:59.529778 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:03:59.529789 | orchestrator | 2026-04-06 04:03:59.529799 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-04-06 04:03:59.529811 | orchestrator | Monday 06 April 2026 04:03:57 +0000 (0:00:03.675) 0:00:38.442 ********** 2026-04-06 04:03:59.529850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-06 04:03:59.529868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-06 04:03:59.529894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-06 04:03:59.529914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 04:03:59.529927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-06 04:03:59.529947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-06 04:04:07.405422 | orchestrator |
2026-04-06 04:04:07.405510 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-04-06 04:04:07.405521 | orchestrator | Monday 06 April 2026 04:03:59 +0000 (0:00:01.872) 0:00:40.315 **********
2026-04-06 04:04:07.405530 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:04:07.405538 | orchestrator |
2026-04-06 04:04:07.405545 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-04-06 04:04:07.405553 | orchestrator | Monday 06 April 2026 04:03:59 +0000 (0:00:00.322) 0:00:40.508 **********
2026-04-06 04:04:07.405560 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:04:07.405567 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:04:07.405623 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:04:07.405631 | orchestrator |
2026-04-06 04:04:07.405637 | orchestrator |
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-04-06 04:04:07.405663 | orchestrator | Monday 06 April 2026 04:04:00 +0000 (0:00:00.322) 0:00:40.830 ********** 2026-04-06 04:04:07.405670 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-06 04:04:07.405677 | orchestrator | 2026-04-06 04:04:07.405684 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-04-06 04:04:07.405690 | orchestrator | Monday 06 April 2026 04:04:01 +0000 (0:00:00.985) 0:00:41.816 ********** 2026-04-06 04:04:07.405699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-06 04:04:07.405722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-06 04:04:07.405729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-06 04:04:07.405750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 04:04:07.405765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 04:04:07.405773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 04:04:07.405780 | orchestrator | 2026-04-06 04:04:07.405787 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-04-06 04:04:07.405798 
| orchestrator | Monday 06 April 2026 04:04:03 +0000 (0:00:00.550) 0:00:44.215 **********
2026-04-06 04:04:07.405805 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:04:07.405823 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:04:07.405830 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:04:07.405837 | orchestrator |
2026-04-06 04:04:07.405843 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-06 04:04:07.405850 | orchestrator | Monday 06 April 2026 04:04:03 +0000 (0:00:00.550) 0:00:44.765 **********
2026-04-06 04:04:07.405857 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 04:04:07.405864 | orchestrator |
2026-04-06 04:04:07.405871 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-04-06 04:04:07.405881 | orchestrator | Monday 06 April 2026 04:04:04 +0000 (0:00:00.677) 0:00:45.443 **********
2026-04-06 04:04:07.405893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-06 04:04:07.405913 | orchestrator |
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-06 04:04:08.471712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-06 04:04:08.471826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 04:04:08.471863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 04:04:08.471877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 04:04:08.471890 | orchestrator | 2026-04-06 04:04:08.471903 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-04-06 04:04:08.471916 | orchestrator | Monday 06 April 2026 04:04:07 +0000 (0:00:02.765) 0:00:48.209 ********** 2026-04-06 04:04:08.471948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-06 04:04:08.471987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 04:04:08.471999 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:04:08.472018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-06 04:04:08.472030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 04:04:08.472042 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:04:08.472054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-06 04:04:08.472085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 04:04:12.201844 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:04:12.201938 | orchestrator | 2026-04-06 
04:04:12.201951 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-04-06 04:04:12.201961 | orchestrator | Monday 06 April 2026 04:04:08 +0000 (0:00:01.056) 0:00:49.265 ********** 2026-04-06 04:04:12.201971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-06 04:04:12.201998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 04:04:12.202008 | 
orchestrator | skipping: [testbed-node-0] 2026-04-06 04:04:12.202077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-06 04:04:12.202108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 04:04:12.202120 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:04:12.202154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-06 04:04:12.202167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 04:04:12.202179 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:04:12.202190 | orchestrator | 2026-04-06 04:04:12.202203 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-04-06 04:04:12.202215 | orchestrator | Monday 06 April 2026 04:04:09 +0000 (0:00:01.022) 0:00:50.287 ********** 2026-04-06 04:04:12.202237 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-06 04:04:12.202250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-06 04:04:12.202281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-06 04:04:18.905792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 04:04:18.905930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 04:04:18.905949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 04:04:18.905963 | orchestrator | 2026-04-06 04:04:18.905999 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-04-06 04:04:18.906013 | orchestrator | Monday 06 April 2026 04:04:12 +0000 (0:00:02.714) 0:00:53.002 ********** 2026-04-06 04:04:18.906090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-06 04:04:18.906155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-06 04:04:18.906170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-06 04:04:18.906189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 04:04:18.906201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 04:04:18.906221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 04:04:18.906233 | orchestrator | 2026-04-06 04:04:18.906244 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-04-06 04:04:18.906255 | orchestrator | Monday 06 April 2026 04:04:18 +0000 (0:00:05.995) 0:00:58.998 ********** 2026-04-06 04:04:18.906277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-06 04:04:20.865945 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 04:04:20.866064 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:04:20.866090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-06 04:04:20.866139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 04:04:20.866147 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:04:20.866170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-06 04:04:20.866209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 04:04:20.866223 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:04:20.866247 | orchestrator | 2026-04-06 04:04:20.866258 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-04-06 04:04:20.866269 | orchestrator | Monday 06 April 2026 04:04:18 +0000 (0:00:00.711) 0:00:59.709 ********** 2026-04-06 04:04:20.866286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-06 04:04:20.866299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-06 04:04:20.866318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-06 04:04:20.866328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 04:04:20.866348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 04:05:21.187736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-04-06 04:05:21.187868 | orchestrator | 2026-04-06 04:05:21.187886 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-06 04:05:21.187899 | orchestrator | Monday 06 April 2026 04:04:20 +0000 (0:00:01.955) 0:01:01.664 ********** 2026-04-06 04:05:21.187909 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:05:21.187920 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:05:21.187930 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:05:21.187940 | orchestrator | 2026-04-06 04:05:21.187951 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-04-06 04:05:21.187961 | orchestrator | Monday 06 April 2026 04:04:21 +0000 (0:00:00.583) 0:01:02.248 ********** 2026-04-06 04:05:21.187971 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:05:21.187981 | orchestrator | 2026-04-06 04:05:21.187991 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-04-06 04:05:21.188001 | orchestrator | Monday 06 April 2026 04:04:23 +0000 (0:00:02.254) 0:01:04.502 ********** 2026-04-06 04:05:21.188010 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:05:21.188020 | orchestrator | 2026-04-06 04:05:21.188030 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-04-06 04:05:21.188043 | orchestrator | Monday 06 April 2026 04:04:26 +0000 (0:00:02.449) 0:01:06.951 ********** 2026-04-06 04:05:21.188060 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:05:21.188077 | orchestrator | 2026-04-06 04:05:21.188094 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-06 04:05:21.188110 | orchestrator | Monday 06 April 2026 04:04:43 +0000 (0:00:17.016) 0:01:23.968 ********** 2026-04-06 04:05:21.188126 | orchestrator | 2026-04-06 04:05:21.188143 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-04-06 04:05:21.188160 | orchestrator | Monday 06 April 2026 04:04:43 +0000 (0:00:00.086) 0:01:24.055 ********** 2026-04-06 04:05:21.188179 | orchestrator | 2026-04-06 04:05:21.188197 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-06 04:05:21.188215 | orchestrator | Monday 06 April 2026 04:04:43 +0000 (0:00:00.086) 0:01:24.142 ********** 2026-04-06 04:05:21.188231 | orchestrator | 2026-04-06 04:05:21.188246 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-04-06 04:05:21.188258 | orchestrator | Monday 06 April 2026 04:04:43 +0000 (0:00:00.079) 0:01:24.222 ********** 2026-04-06 04:05:21.188269 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:05:21.188281 | orchestrator | changed: [testbed-node-2] 2026-04-06 04:05:21.188293 | orchestrator | changed: [testbed-node-1] 2026-04-06 04:05:21.188304 | orchestrator | 2026-04-06 04:05:21.188316 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-04-06 04:05:21.188327 | orchestrator | Monday 06 April 2026 04:05:03 +0000 (0:00:20.551) 0:01:44.773 ********** 2026-04-06 04:05:21.188338 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:05:21.188349 | orchestrator | changed: [testbed-node-1] 2026-04-06 04:05:21.188360 | orchestrator | changed: [testbed-node-2] 2026-04-06 04:05:21.188371 | orchestrator | 2026-04-06 04:05:21.188383 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 04:05:21.188396 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-06 04:05:21.188409 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-06 04:05:21.188421 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-04-06 04:05:21.188432 | orchestrator | 2026-04-06 04:05:21.188443 | orchestrator | 2026-04-06 04:05:21.188455 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 04:05:21.188466 | orchestrator | Monday 06 April 2026 04:05:20 +0000 (0:00:16.796) 0:02:01.569 ********** 2026-04-06 04:05:21.188487 | orchestrator | =============================================================================== 2026-04-06 04:05:21.188499 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 20.55s 2026-04-06 04:05:21.188511 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.02s 2026-04-06 04:05:21.188521 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 16.80s 2026-04-06 04:05:21.188533 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.88s 2026-04-06 04:05:21.188545 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.00s 2026-04-06 04:05:21.188556 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.07s 2026-04-06 04:05:21.188568 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.06s 2026-04-06 04:05:21.188626 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.80s 2026-04-06 04:05:21.188637 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.68s 2026-04-06 04:05:21.188648 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.68s 2026-04-06 04:05:21.188659 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.56s 2026-04-06 04:05:21.188670 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.48s 2026-04-06 04:05:21.188681 | 
orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.47s 2026-04-06 04:05:21.188692 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.77s 2026-04-06 04:05:21.188703 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.71s 2026-04-06 04:05:21.188714 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.45s 2026-04-06 04:05:21.188733 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.40s 2026-04-06 04:05:21.188745 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.25s 2026-04-06 04:05:21.188756 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.96s 2026-04-06 04:05:21.188774 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.87s 2026-04-06 04:05:22.116775 | orchestrator | ok: Runtime: 1:46:40.492255 2026-04-06 04:05:22.399417 | 2026-04-06 04:05:22.399600 | TASK [Deploy in a nutshell] 2026-04-06 04:05:22.932721 | orchestrator | skipping: Conditional result was False 2026-04-06 04:05:22.958192 | 2026-04-06 04:05:22.958356 | TASK [Bootstrap services] 2026-04-06 04:05:23.679851 | orchestrator | 2026-04-06 04:05:23.679995 | orchestrator | # BOOTSTRAP 2026-04-06 04:05:23.680008 | orchestrator | 2026-04-06 04:05:23.680017 | orchestrator | + set -e 2026-04-06 04:05:23.680024 | orchestrator | + echo 2026-04-06 04:05:23.680033 | orchestrator | + echo '# BOOTSTRAP' 2026-04-06 04:05:23.680044 | orchestrator | + echo 2026-04-06 04:05:23.680073 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-04-06 04:05:23.687123 | orchestrator | + set -e 2026-04-06 04:05:23.687246 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-04-06 04:05:26.068030 | orchestrator | 2026-04-06 04:05:26 | INFO  | It takes a 
moment until task 67df7f32-e286-4dfc-95f7-1b6b25df7a97 (flavor-manager) has been started and output is visible here. 2026-04-06 04:05:34.513035 | orchestrator | 2026-04-06 04:05:29 | INFO  | Flavor SCS-1L-1 created 2026-04-06 04:05:34.513221 | orchestrator | 2026-04-06 04:05:29 | INFO  | Flavor SCS-1L-1-5 created 2026-04-06 04:05:34.513245 | orchestrator | 2026-04-06 04:05:30 | INFO  | Flavor SCS-1V-2 created 2026-04-06 04:05:34.513260 | orchestrator | 2026-04-06 04:05:30 | INFO  | Flavor SCS-1V-2-5 created 2026-04-06 04:05:34.513274 | orchestrator | 2026-04-06 04:05:30 | INFO  | Flavor SCS-1V-4 created 2026-04-06 04:05:34.513288 | orchestrator | 2026-04-06 04:05:30 | INFO  | Flavor SCS-1V-4-10 created 2026-04-06 04:05:34.513302 | orchestrator | 2026-04-06 04:05:30 | INFO  | Flavor SCS-1V-8 created 2026-04-06 04:05:34.513316 | orchestrator | 2026-04-06 04:05:30 | INFO  | Flavor SCS-1V-8-20 created 2026-04-06 04:05:34.513344 | orchestrator | 2026-04-06 04:05:30 | INFO  | Flavor SCS-2V-4 created 2026-04-06 04:05:34.513359 | orchestrator | 2026-04-06 04:05:31 | INFO  | Flavor SCS-2V-4-10 created 2026-04-06 04:05:34.513373 | orchestrator | 2026-04-06 04:05:31 | INFO  | Flavor SCS-2V-8 created 2026-04-06 04:05:34.513386 | orchestrator | 2026-04-06 04:05:31 | INFO  | Flavor SCS-2V-8-20 created 2026-04-06 04:05:34.513401 | orchestrator | 2026-04-06 04:05:31 | INFO  | Flavor SCS-2V-16 created 2026-04-06 04:05:34.513414 | orchestrator | 2026-04-06 04:05:31 | INFO  | Flavor SCS-2V-16-50 created 2026-04-06 04:05:34.513428 | orchestrator | 2026-04-06 04:05:31 | INFO  | Flavor SCS-4V-8 created 2026-04-06 04:05:34.513441 | orchestrator | 2026-04-06 04:05:32 | INFO  | Flavor SCS-4V-8-20 created 2026-04-06 04:05:34.513455 | orchestrator | 2026-04-06 04:05:32 | INFO  | Flavor SCS-4V-16 created 2026-04-06 04:05:34.513469 | orchestrator | 2026-04-06 04:05:32 | INFO  | Flavor SCS-4V-16-50 created 2026-04-06 04:05:34.513483 | orchestrator | 2026-04-06 04:05:32 | INFO  | Flavor 
SCS-4V-32 created 2026-04-06 04:05:34.513496 | orchestrator | 2026-04-06 04:05:32 | INFO  | Flavor SCS-4V-32-100 created 2026-04-06 04:05:34.513510 | orchestrator | 2026-04-06 04:05:32 | INFO  | Flavor SCS-8V-16 created 2026-04-06 04:05:34.513524 | orchestrator | 2026-04-06 04:05:33 | INFO  | Flavor SCS-8V-16-50 created 2026-04-06 04:05:34.513539 | orchestrator | 2026-04-06 04:05:33 | INFO  | Flavor SCS-8V-32 created 2026-04-06 04:05:34.513552 | orchestrator | 2026-04-06 04:05:33 | INFO  | Flavor SCS-8V-32-100 created 2026-04-06 04:05:34.513566 | orchestrator | 2026-04-06 04:05:33 | INFO  | Flavor SCS-16V-32 created 2026-04-06 04:05:34.513606 | orchestrator | 2026-04-06 04:05:33 | INFO  | Flavor SCS-16V-32-100 created 2026-04-06 04:05:34.513621 | orchestrator | 2026-04-06 04:05:33 | INFO  | Flavor SCS-2V-4-20s created 2026-04-06 04:05:34.513652 | orchestrator | 2026-04-06 04:05:34 | INFO  | Flavor SCS-4V-8-50s created 2026-04-06 04:05:34.513676 | orchestrator | 2026-04-06 04:05:34 | INFO  | Flavor SCS-8V-32-100s created 2026-04-06 04:05:37.061667 | orchestrator | 2026-04-06 04:05:37 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-04-06 04:05:47.201380 | orchestrator | 2026-04-06 04:05:47 | INFO  | Task a306d890-cb6a-4819-894b-88d894883453 (bootstrap-basic) was prepared for execution. 2026-04-06 04:05:47.201475 | orchestrator | 2026-04-06 04:05:47 | INFO  | It takes a moment until task a306d890-cb6a-4819-894b-88d894883453 (bootstrap-basic) has been started and output is visible here. 
2026-04-06 04:06:35.951115 | orchestrator | 2026-04-06 04:06:35.951229 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-04-06 04:06:35.951246 | orchestrator | 2026-04-06 04:06:35.951260 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-06 04:06:35.951273 | orchestrator | Monday 06 April 2026 04:05:52 +0000 (0:00:00.094) 0:00:00.094 ********** 2026-04-06 04:06:35.951285 | orchestrator | ok: [localhost] 2026-04-06 04:06:35.951297 | orchestrator | 2026-04-06 04:06:35.951309 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-04-06 04:06:35.951321 | orchestrator | Monday 06 April 2026 04:05:54 +0000 (0:00:02.044) 0:00:02.138 ********** 2026-04-06 04:06:35.951332 | orchestrator | ok: [localhost] 2026-04-06 04:06:35.951344 | orchestrator | 2026-04-06 04:06:35.951355 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-04-06 04:06:35.951367 | orchestrator | Monday 06 April 2026 04:06:02 +0000 (0:00:08.260) 0:00:10.399 ********** 2026-04-06 04:06:35.951378 | orchestrator | changed: [localhost] 2026-04-06 04:06:35.951390 | orchestrator | 2026-04-06 04:06:35.951402 | orchestrator | TASK [Create public network] *************************************************** 2026-04-06 04:06:35.951414 | orchestrator | Monday 06 April 2026 04:06:09 +0000 (0:00:07.007) 0:00:17.406 ********** 2026-04-06 04:06:35.951425 | orchestrator | changed: [localhost] 2026-04-06 04:06:35.951437 | orchestrator | 2026-04-06 04:06:35.951448 | orchestrator | TASK [Set public network to default] ******************************************* 2026-04-06 04:06:35.951459 | orchestrator | Monday 06 April 2026 04:06:15 +0000 (0:00:05.858) 0:00:23.265 ********** 2026-04-06 04:06:35.951475 | orchestrator | changed: [localhost] 2026-04-06 04:06:35.951487 | orchestrator | 2026-04-06 04:06:35.951498 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-04-06 04:06:35.951510 | orchestrator | Monday 06 April 2026 04:06:22 +0000 (0:00:07.064) 0:00:30.330 ********** 2026-04-06 04:06:35.951521 | orchestrator | changed: [localhost] 2026-04-06 04:06:35.951532 | orchestrator | 2026-04-06 04:06:35.951544 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-04-06 04:06:35.951555 | orchestrator | Monday 06 April 2026 04:06:27 +0000 (0:00:04.815) 0:00:35.145 ********** 2026-04-06 04:06:35.951567 | orchestrator | changed: [localhost] 2026-04-06 04:06:35.951599 | orchestrator | 2026-04-06 04:06:35.951611 | orchestrator | TASK [Create manager role] ***************************************************** 2026-04-06 04:06:35.951643 | orchestrator | Monday 06 April 2026 04:06:31 +0000 (0:00:04.385) 0:00:39.530 ********** 2026-04-06 04:06:35.951655 | orchestrator | ok: [localhost] 2026-04-06 04:06:35.951667 | orchestrator | 2026-04-06 04:06:35.951678 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 04:06:35.951696 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 04:06:35.951716 | orchestrator | 2026-04-06 04:06:35.951736 | orchestrator | 2026-04-06 04:06:35.951755 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 04:06:35.951774 | orchestrator | Monday 06 April 2026 04:06:35 +0000 (0:00:03.980) 0:00:43.510 ********** 2026-04-06 04:06:35.951792 | orchestrator | =============================================================================== 2026-04-06 04:06:35.951811 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.26s 2026-04-06 04:06:35.951827 | orchestrator | Set public network to default ------------------------------------------- 7.06s 2026-04-06 04:06:35.951844 | 
orchestrator | Create volume type LUKS ------------------------------------------------- 7.01s 2026-04-06 04:06:35.951863 | orchestrator | Create public network --------------------------------------------------- 5.86s 2026-04-06 04:06:35.951914 | orchestrator | Create public subnet ---------------------------------------------------- 4.82s 2026-04-06 04:06:35.951932 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.39s 2026-04-06 04:06:35.951945 | orchestrator | Create manager role ----------------------------------------------------- 3.98s 2026-04-06 04:06:35.951956 | orchestrator | Gathering Facts --------------------------------------------------------- 2.04s 2026-04-06 04:06:38.645752 | orchestrator | 2026-04-06 04:06:38 | INFO  | It takes a moment until task f998410b-6782-417d-a19d-768718cc650f (image-manager) has been started and output is visible here. 2026-04-06 04:07:21.784111 | orchestrator | 2026-04-06 04:06:41 | INFO  | Processing image 'Cirros 0.6.2' 2026-04-06 04:07:21.784241 | orchestrator | 2026-04-06 04:06:41 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-04-06 04:07:21.784255 | orchestrator | 2026-04-06 04:06:41 | INFO  | Importing image Cirros 0.6.2 2026-04-06 04:07:21.784263 | orchestrator | 2026-04-06 04:06:41 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-04-06 04:07:21.784271 | orchestrator | 2026-04-06 04:06:43 | INFO  | Waiting for image to leave queued state... 2026-04-06 04:07:21.784278 | orchestrator | 2026-04-06 04:06:45 | INFO  | Waiting for import to complete... 
2026-04-06 04:07:21.784285 | orchestrator | 2026-04-06 04:06:56 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-04-06 04:07:21.784292 | orchestrator | 2026-04-06 04:06:56 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-04-06 04:07:21.784298 | orchestrator | 2026-04-06 04:06:56 | INFO  | Setting internal_version = 0.6.2
2026-04-06 04:07:21.784305 | orchestrator | 2026-04-06 04:06:56 | INFO  | Setting image_original_user = cirros
2026-04-06 04:07:21.784312 | orchestrator | 2026-04-06 04:06:56 | INFO  | Adding tag os:cirros
2026-04-06 04:07:21.784319 | orchestrator | 2026-04-06 04:06:56 | INFO  | Setting property architecture: x86_64
2026-04-06 04:07:21.784335 | orchestrator | 2026-04-06 04:06:56 | INFO  | Setting property hw_disk_bus: scsi
2026-04-06 04:07:21.784342 | orchestrator | 2026-04-06 04:06:57 | INFO  | Setting property hw_rng_model: virtio
2026-04-06 04:07:21.784349 | orchestrator | 2026-04-06 04:06:57 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-06 04:07:21.784365 | orchestrator | 2026-04-06 04:06:57 | INFO  | Setting property hw_watchdog_action: reset
2026-04-06 04:07:21.784376 | orchestrator | 2026-04-06 04:06:58 | INFO  | Setting property hypervisor_type: qemu
2026-04-06 04:07:21.784387 | orchestrator | 2026-04-06 04:06:58 | INFO  | Setting property os_distro: cirros
2026-04-06 04:07:21.784405 | orchestrator | 2026-04-06 04:06:58 | INFO  | Setting property os_purpose: minimal
2026-04-06 04:07:21.784415 | orchestrator | 2026-04-06 04:06:58 | INFO  | Setting property replace_frequency: never
2026-04-06 04:07:21.784425 | orchestrator | 2026-04-06 04:06:59 | INFO  | Setting property uuid_validity: none
2026-04-06 04:07:21.784434 | orchestrator | 2026-04-06 04:06:59 | INFO  | Setting property provided_until: none
2026-04-06 04:07:21.784444 | orchestrator | 2026-04-06 04:06:59 | INFO  | Setting property image_description: Cirros
2026-04-06 04:07:21.784454 | orchestrator | 2026-04-06 04:06:59 | INFO  | Setting property image_name: Cirros
2026-04-06 04:07:21.784464 | orchestrator | 2026-04-06 04:07:00 | INFO  | Setting property internal_version: 0.6.2
2026-04-06 04:07:21.784474 | orchestrator | 2026-04-06 04:07:00 | INFO  | Setting property image_original_user: cirros
2026-04-06 04:07:21.784507 | orchestrator | 2026-04-06 04:07:00 | INFO  | Setting property os_version: 0.6.2
2026-04-06 04:07:21.784528 | orchestrator | 2026-04-06 04:07:00 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-04-06 04:07:21.784541 | orchestrator | 2026-04-06 04:07:01 | INFO  | Setting property image_build_date: 2023-05-30
2026-04-06 04:07:21.784548 | orchestrator | 2026-04-06 04:07:01 | INFO  | Checking status of 'Cirros 0.6.2'
2026-04-06 04:07:21.784555 | orchestrator | 2026-04-06 04:07:01 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-04-06 04:07:21.784561 | orchestrator | 2026-04-06 04:07:01 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-04-06 04:07:21.784567 | orchestrator | 2026-04-06 04:07:01 | INFO  | Processing image 'Cirros 0.6.3'
2026-04-06 04:07:21.784576 | orchestrator | 2026-04-06 04:07:01 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-04-06 04:07:21.784605 | orchestrator | 2026-04-06 04:07:01 | INFO  | Importing image Cirros 0.6.3
2026-04-06 04:07:21.784613 | orchestrator | 2026-04-06 04:07:01 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-06 04:07:21.784619 | orchestrator | 2026-04-06 04:07:02 | INFO  | Waiting for image to leave queued state...
2026-04-06 04:07:21.784626 | orchestrator | 2026-04-06 04:07:04 | INFO  | Waiting for import to complete...
2026-04-06 04:07:21.784647 | orchestrator | 2026-04-06 04:07:14 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-04-06 04:07:21.784655 | orchestrator | 2026-04-06 04:07:15 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-04-06 04:07:21.784663 | orchestrator | 2026-04-06 04:07:15 | INFO  | Setting internal_version = 0.6.3
2026-04-06 04:07:21.784671 | orchestrator | 2026-04-06 04:07:15 | INFO  | Setting image_original_user = cirros
2026-04-06 04:07:21.784679 | orchestrator | 2026-04-06 04:07:15 | INFO  | Adding tag os:cirros
2026-04-06 04:07:21.784686 | orchestrator | 2026-04-06 04:07:15 | INFO  | Setting property architecture: x86_64
2026-04-06 04:07:21.784694 | orchestrator | 2026-04-06 04:07:15 | INFO  | Setting property hw_disk_bus: scsi
2026-04-06 04:07:21.784701 | orchestrator | 2026-04-06 04:07:16 | INFO  | Setting property hw_rng_model: virtio
2026-04-06 04:07:21.784709 | orchestrator | 2026-04-06 04:07:16 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-06 04:07:21.784716 | orchestrator | 2026-04-06 04:07:16 | INFO  | Setting property hw_watchdog_action: reset
2026-04-06 04:07:21.784724 | orchestrator | 2026-04-06 04:07:17 | INFO  | Setting property hypervisor_type: qemu
2026-04-06 04:07:21.784731 | orchestrator | 2026-04-06 04:07:17 | INFO  | Setting property os_distro: cirros
2026-04-06 04:07:21.784739 | orchestrator | 2026-04-06 04:07:17 | INFO  | Setting property os_purpose: minimal
2026-04-06 04:07:21.784746 | orchestrator | 2026-04-06 04:07:18 | INFO  | Setting property replace_frequency: never
2026-04-06 04:07:21.784754 | orchestrator | 2026-04-06 04:07:18 | INFO  | Setting property uuid_validity: none
2026-04-06 04:07:21.784761 | orchestrator | 2026-04-06 04:07:18 | INFO  | Setting property provided_until: none
2026-04-06 04:07:21.784768 | orchestrator | 2026-04-06 04:07:18 | INFO  | Setting property image_description: Cirros
2026-04-06 04:07:21.784776 | orchestrator | 2026-04-06 04:07:19 | INFO  | Setting property image_name: Cirros
2026-04-06 04:07:21.784783 | orchestrator | 2026-04-06 04:07:19 | INFO  | Setting property internal_version: 0.6.3
2026-04-06 04:07:21.784797 | orchestrator | 2026-04-06 04:07:19 | INFO  | Setting property image_original_user: cirros
2026-04-06 04:07:21.784805 | orchestrator | 2026-04-06 04:07:19 | INFO  | Setting property os_version: 0.6.3
2026-04-06 04:07:21.784812 | orchestrator | 2026-04-06 04:07:20 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-06 04:07:21.784820 | orchestrator | 2026-04-06 04:07:20 | INFO  | Setting property image_build_date: 2024-09-26
2026-04-06 04:07:21.784828 | orchestrator | 2026-04-06 04:07:20 | INFO  | Checking status of 'Cirros 0.6.3'
2026-04-06 04:07:21.784835 | orchestrator | 2026-04-06 04:07:20 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-04-06 04:07:21.784843 | orchestrator | 2026-04-06 04:07:20 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-04-06 04:07:22.195036 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-04-06 04:07:24.750486 | orchestrator | 2026-04-06 04:07:24 | INFO  | date: 2026-04-06
2026-04-06 04:07:24.750656 | orchestrator | 2026-04-06 04:07:24 | INFO  | image: octavia-amphora-haproxy-2024.2.20260406.qcow2
2026-04-06 04:07:24.750701 | orchestrator | 2026-04-06 04:07:24 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260406.qcow2
2026-04-06 04:07:24.750718 | orchestrator | 2026-04-06 04:07:24 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260406.qcow2.CHECKSUM
2026-04-06 04:07:24.881614 | orchestrator | 2026-04-06 04:07:24 | INFO  | checksum: 3f9899e9aa23b19857b0120b3f03cecbbd707cd89f3778f002b8e98238de2633
2026-04-06 04:07:24.964149 | orchestrator | 2026-04-06 04:07:24 | INFO  | It takes a moment until task 91f4e79a-f213-4534-80ec-f6573486ef68 (image-manager) has been started and output is visible here.
2026-04-06 04:08:38.427211 | orchestrator | 2026-04-06 04:07:27 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-04-06'
2026-04-06 04:08:38.427321 | orchestrator | 2026-04-06 04:07:27 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260406.qcow2: 200
2026-04-06 04:08:38.427337 | orchestrator | 2026-04-06 04:07:27 | INFO  | Importing image OpenStack Octavia Amphora 2026-04-06
2026-04-06 04:08:38.427347 | orchestrator | 2026-04-06 04:07:27 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260406.qcow2
2026-04-06 04:08:38.427357 | orchestrator | 2026-04-06 04:07:28 | INFO  | Waiting for image to leave queued state...
2026-04-06 04:08:38.427365 | orchestrator | 2026-04-06 04:07:30 | INFO  | Waiting for import to complete...
2026-04-06 04:08:38.427374 | orchestrator | 2026-04-06 04:07:40 | INFO  | Waiting for import to complete...
2026-04-06 04:08:38.427382 | orchestrator | 2026-04-06 04:07:50 | INFO  | Waiting for import to complete...
2026-04-06 04:08:38.427390 | orchestrator | 2026-04-06 04:08:01 | INFO  | Waiting for import to complete...
2026-04-06 04:08:38.427400 | orchestrator | 2026-04-06 04:08:11 | INFO  | Waiting for import to complete...
2026-04-06 04:08:38.427409 | orchestrator | 2026-04-06 04:08:21 | INFO  | Waiting for import to complete...
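The repeated "Waiting for import to complete..." lines above come from the image manager polling Glance roughly every ten seconds until the import finishes. As an illustrative sketch only (not the actual osism/openstack-image-manager code), with a hypothetical `get_image_status` callback standing in for the Glance API call:

```python
import time


def wait_for_import(get_image_status, interval=10, timeout=600):
    """Poll until an image import finishes or the timeout expires.

    `get_image_status` is a hypothetical helper returning the image
    status string ('queued', 'importing', 'active', ...); the real
    tool queries the Glance API instead.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_image_status() == "active":
            return True
        print("Waiting for import to complete...")
        time.sleep(interval)
    raise TimeoutError("image import did not finish in time")
```

The log matches this shape: the small Cirros images need one poll cycle, while the ~2 GB Amphora image above needs six.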
2026-04-06 04:08:38.427417 | orchestrator | 2026-04-06 04:08:31 | INFO  | Import of 'OpenStack Octavia Amphora 2026-04-06' successfully completed, reloading images
2026-04-06 04:08:38.427426 | orchestrator | 2026-04-06 04:08:32 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-04-06'
2026-04-06 04:08:38.427454 | orchestrator | 2026-04-06 04:08:32 | INFO  | Setting internal_version = 2026-04-06
2026-04-06 04:08:38.427462 | orchestrator | 2026-04-06 04:08:32 | INFO  | Setting image_original_user = ubuntu
2026-04-06 04:08:38.427471 | orchestrator | 2026-04-06 04:08:32 | INFO  | Adding tag amphora
2026-04-06 04:08:38.427479 | orchestrator | 2026-04-06 04:08:32 | INFO  | Adding tag os:ubuntu
2026-04-06 04:08:38.427487 | orchestrator | 2026-04-06 04:08:32 | INFO  | Setting property architecture: x86_64
2026-04-06 04:08:38.427495 | orchestrator | 2026-04-06 04:08:33 | INFO  | Setting property hw_disk_bus: scsi
2026-04-06 04:08:38.427503 | orchestrator | 2026-04-06 04:08:33 | INFO  | Setting property hw_rng_model: virtio
2026-04-06 04:08:38.427511 | orchestrator | 2026-04-06 04:08:33 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-06 04:08:38.427519 | orchestrator | 2026-04-06 04:08:34 | INFO  | Setting property hw_watchdog_action: reset
2026-04-06 04:08:38.427527 | orchestrator | 2026-04-06 04:08:34 | INFO  | Setting property hypervisor_type: qemu
2026-04-06 04:08:38.427535 | orchestrator | 2026-04-06 04:08:34 | INFO  | Setting property os_distro: ubuntu
2026-04-06 04:08:38.427543 | orchestrator | 2026-04-06 04:08:34 | INFO  | Setting property replace_frequency: quarterly
2026-04-06 04:08:38.427551 | orchestrator | 2026-04-06 04:08:35 | INFO  | Setting property uuid_validity: last-1
2026-04-06 04:08:38.427559 | orchestrator | 2026-04-06 04:08:35 | INFO  | Setting property provided_until: none
2026-04-06 04:08:38.427567 | orchestrator | 2026-04-06 04:08:35 | INFO  | Setting property os_purpose: network
2026-04-06 04:08:38.427587 | orchestrator | 2026-04-06 04:08:35 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-04-06 04:08:38.427658 | orchestrator | 2026-04-06 04:08:36 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-04-06 04:08:38.427669 | orchestrator | 2026-04-06 04:08:36 | INFO  | Setting property internal_version: 2026-04-06
2026-04-06 04:08:38.427682 | orchestrator | 2026-04-06 04:08:36 | INFO  | Setting property image_original_user: ubuntu
2026-04-06 04:08:38.427695 | orchestrator | 2026-04-06 04:08:37 | INFO  | Setting property os_version: 2026-04-06
2026-04-06 04:08:38.427708 | orchestrator | 2026-04-06 04:08:37 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260406.qcow2
2026-04-06 04:08:38.427721 | orchestrator | 2026-04-06 04:08:37 | INFO  | Setting property image_build_date: 2026-04-06
2026-04-06 04:08:38.427757 | orchestrator | 2026-04-06 04:08:37 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-04-06'
2026-04-06 04:08:38.427781 | orchestrator | 2026-04-06 04:08:37 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-04-06'
2026-04-06 04:08:38.427815 | orchestrator | 2026-04-06 04:08:38 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-04-06 04:08:38.427830 | orchestrator | 2026-04-06 04:08:38 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-04-06 04:08:38.427844 | orchestrator | 2026-04-06 04:08:38 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-04-06 04:08:38.427857 | orchestrator | 2026-04-06 04:08:38 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-04-06 04:08:39.148781 | orchestrator | ok: Runtime: 0:03:15.507527
2026-04-06 04:08:39.174010 |
2026-04-06 04:08:39.174185 | TASK [Run checks]
2026-04-06 04:08:39.929812 | orchestrator | + set -e
2026-04-06 04:08:39.930130 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-06 04:08:39.930163 | orchestrator | ++ export INTERACTIVE=false
2026-04-06 04:08:39.930185 | orchestrator | ++ INTERACTIVE=false
2026-04-06 04:08:39.930199 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-06 04:08:39.930212 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-06 04:08:39.930226 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-06 04:08:39.930818 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-06 04:08:39.937780 | orchestrator |
2026-04-06 04:08:39.937887 | orchestrator | # CHECK
2026-04-06 04:08:39.937904 | orchestrator |
2026-04-06 04:08:39.937917 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-06 04:08:39.937934 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-06 04:08:39.937946 | orchestrator | + echo
2026-04-06 04:08:39.937957 | orchestrator | + echo '# CHECK'
2026-04-06 04:08:39.937968 | orchestrator | + echo
2026-04-06 04:08:39.937983 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-06 04:08:39.938642 | orchestrator | ++ semver 9.5.0 5.0.0
2026-04-06 04:08:40.002577 | orchestrator |
2026-04-06 04:08:40.002713 | orchestrator | ## Containers @ testbed-manager
2026-04-06 04:08:40.002726 | orchestrator |
2026-04-06 04:08:40.002738 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-06 04:08:40.002746 | orchestrator | + echo
2026-04-06 04:08:40.002754 | orchestrator | + echo '## Containers @ testbed-manager'
2026-04-06 04:08:40.002763 | orchestrator | + echo
2026-04-06 04:08:40.002770 | orchestrator | + osism container testbed-manager ps
2026-04-06 04:08:42.339918 | orchestrator | 2026-04-06 04:08:42 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-04-06 04:08:42.757304 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-06 04:08:42.757473 | orchestrator | 52d7db777867 registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter
2026-04-06 04:08:42.757515 | orchestrator | be995142a633 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager
2026-04-06 04:08:42.757549 | orchestrator | b1b1199f5243 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-04-06 04:08:42.757569 | orchestrator | 65b40296956b registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-04-06 04:08:42.757619 | orchestrator | 898c10b301b4 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server
2026-04-06 04:08:42.757639 | orchestrator | 76f4b9ecc788 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" About an hour ago Up About an hour cephclient
2026-04-06 04:08:42.757651 | orchestrator | de06a1a37512 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-04-06 04:08:42.757664 | orchestrator | d483a8ec44ae registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-04-06 04:08:42.757701 | orchestrator | 9b7f0d3c432e registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-04-06 04:08:42.757714 | orchestrator | 99d4fad4b516 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient
2026-04-06 04:08:42.757725 | orchestrator | e9ce20ec447d phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin
2026-04-06 04:08:42.757737 | orchestrator | cda30e19adea registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer
2026-04-06 04:08:42.757749 | orchestrator | f8716af02d81 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit
2026-04-06 04:08:42.757771 | orchestrator | 52996163e063 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid
2026-04-06 04:08:42.757810 | orchestrator | 52c3e892008e registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1
2026-04-06 04:08:42.757824 | orchestrator | 33e283c428ee registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible
2026-04-06 04:08:42.757836 | orchestrator | 96a1ea79f3af registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes
2026-04-06 04:08:42.757847 | orchestrator | 661923d0665b registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible
2026-04-06 04:08:42.757858 | orchestrator | 22636aafc442 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible
2026-04-06 04:08:42.757870 | orchestrator | c4b5b426bab9 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1
2026-04-06 04:08:42.757881 | orchestrator | 6488764b747c registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1
2026-04-06 04:08:42.757893 | orchestrator | 4f3b8cb3c0a9 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient
2026-04-06 04:08:42.757913 | orchestrator | 87c1a9bd3504 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1
2026-04-06 04:08:42.757931 | orchestrator | 39d5286a8082 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1
2026-04-06 04:08:42.758012 | orchestrator | 0f33d93fbc08 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1
2026-04-06 04:08:42.758664 | orchestrator | 6d0f336a365a registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend
2026-04-06 04:08:42.758701 | orchestrator | f630f6b63053 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1
2026-04-06 04:08:42.758720 | orchestrator | 9df28d762c4e registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-04-06 04:08:42.758752 | orchestrator | 81e8d502a3cb registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1
2026-04-06 04:08:42.760554 | orchestrator | 81ea1e4eb5b2 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-04-06 04:08:43.173843 | orchestrator |
2026-04-06 04:08:43.173945 | orchestrator | ## Images @ testbed-manager
2026-04-06 04:08:43.173961 | orchestrator |
2026-04-06 04:08:43.173974 | orchestrator | + echo
2026-04-06 04:08:43.173985 | orchestrator | + echo '## Images @ testbed-manager'
2026-04-06 04:08:43.173997 | orchestrator | + echo
2026-04-06 04:08:43.174013 | orchestrator | + osism container testbed-manager images
2026-04-06 04:08:45.824367 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-06 04:08:45.824484 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 0455d6e4cec5 24 hours ago 239MB
2026-04-06 04:08:45.824500 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB
2026-04-06 04:08:45.824511 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 months ago 11.5MB
2026-04-06 04:08:45.824521 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 4 months ago 608MB
2026-04-06 04:08:45.824534 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-06 04:08:45.824545 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-06 04:08:45.824555 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-06 04:08:45.824565 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 4 months ago 308MB
2026-04-06 04:08:45.824574 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-06 04:08:45.824642 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 4 months ago 404MB
2026-04-06 04:08:45.824654 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 4 months ago 839MB
2026-04-06 04:08:45.824664 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-06 04:08:45.824673 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 4 months ago 330MB
2026-04-06 04:08:45.824683 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 4 months ago 613MB
2026-04-06 04:08:45.824693 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 4 months ago 560MB
2026-04-06 04:08:45.824703 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 4 months ago 1.23GB
2026-04-06 04:08:45.824713 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 4 months ago 383MB
2026-04-06 04:08:45.824723 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 4 months ago 238MB
2026-04-06 04:08:45.824733 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB
2026-04-06 04:08:45.824742 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB
2026-04-06 04:08:45.824752 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB
2026-04-06 04:08:45.824762 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB
2026-04-06 04:08:45.824772 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 11 months ago 453MB
2026-04-06 04:08:45.824782 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB
2026-04-06 04:08:45.824791 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB
2026-04-06 04:08:46.182826 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-06 04:08:46.182937 | orchestrator | ++ semver 9.5.0 5.0.0
2026-04-06 04:08:46.250072 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-06 04:08:46.250160 | orchestrator | + echo
2026-04-06 04:08:46.250175 | orchestrator |
2026-04-06 04:08:46.250187 | orchestrator | ## Containers @ testbed-node-0
2026-04-06 04:08:46.250199 | orchestrator |
2026-04-06 04:08:46.250210 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-04-06 04:08:46.250222 | orchestrator | + echo
2026-04-06 04:08:46.250233 | orchestrator | + osism container testbed-node-0 ps
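In the check script traced above, `semver 9.5.0 5.0.0` returns 1 and the guard `[[ 1 -eq -1 ]]` is false, so the container listing runs for each node; the listing would be skipped only if the manager version were older than 5.0.0 (result -1). The actual `semver` helper belongs to the testbed tooling; as an illustrative sketch only, a dotted-version comparison with the same -1/0/1 convention could look like:

```python
def semver_cmp(a: str, b: str) -> int:
    """Compare two dotted version strings numerically.

    Returns -1 if a < b, 0 if equal, 1 if a > b (the convention the
    check script tests with `[[ ... -eq -1 ]]`). Pre-release suffixes
    are not handled in this sketch.
    """
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    # Python compares lists element-wise, which matches per-component
    # numeric version comparison.
    return (pa > pb) - (pa < pb)


print(semver_cmp("9.5.0", "5.0.0"))  # -> 1, so the listing is not skipped
```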
2026-04-06 04:08:48.938225 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-06 04:08:48.938328 | orchestrator | b6a40f5068b0 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-04-06 04:08:48.938344 | orchestrator | d50a4de7b93e registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api
2026-04-06 04:08:48.938355 | orchestrator | 8072e5862517 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2026-04-06 04:08:48.938365 | orchestrator | beb78282c159 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-04-06 04:08:48.938401 | orchestrator | 44f9bcf3ab08 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-04-06 04:08:48.938419 | orchestrator | 766a929d45ba registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-04-06 04:08:48.938445 | orchestrator | c8a4bd3a7466 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-04-06 04:08:48.938464 | orchestrator | b1c77cf6aa90 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-04-06 04:08:48.938480 | orchestrator | 7149bfd79b64 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-04-06 04:08:48.938495 | orchestrator | 1757e3878b59 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler
2026-04-06 04:08:48.938512 | orchestrator | d2be2204ece1 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data
2026-04-06 04:08:48.938527 | orchestrator | e2e99354dd80 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api
2026-04-06 04:08:48.938541 | orchestrator | 2ba98a3a55e1 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier
2026-04-06 04:08:48.938555 | orchestrator | e1af52f55050 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener
2026-04-06 04:08:48.938571 | orchestrator | 672f2f1f08b8 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator
2026-04-06 04:08:48.938588 | orchestrator | bb73c6b231bc registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-04-06 04:08:48.938678 | orchestrator | 4c6059cbc9ce registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central
2026-04-06 04:08:48.938694 | orchestrator | 5c6447f56ef6 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification
2026-04-06 04:08:48.938708 | orchestrator | 43c70f40b1ea registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker
2026-04-06 04:08:48.938746 | orchestrator | 1ba7f823bf01 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping
2026-04-06 04:08:48.938762 | orchestrator | f54d26cbfae8 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager
2026-04-06 04:08:48.938776 | orchestrator | 94be667135d6 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent
2026-04-06 04:08:48.938805 | orchestrator | b8360126ae0f registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api
2026-04-06 04:08:48.938821 | orchestrator | 42ef35c14303 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker
2026-04-06 04:08:48.938836 | orchestrator | ba3c2c160ee5 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns
2026-04-06 04:08:48.938856 | orchestrator | 39d063ba3d1e registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer
2026-04-06 04:08:48.938873 | orchestrator | a92e7ac1738f registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central
2026-04-06 04:08:48.938888 | orchestrator | f6069f826716 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api
2026-04-06 04:08:48.938903 | orchestrator | 99af58b70821 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_backend_bind9
2026-04-06 04:08:48.938917 | orchestrator | 1c3f96450027 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_worker
2026-04-06 04:08:48.938933 | orchestrator | 310c19daef12 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_keystone_listener
2026-04-06 04:08:48.938949 | orchestrator | f0b59599e167 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api
2026-04-06 04:08:48.938964 | orchestrator | f944c94ae559 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup
2026-04-06 04:08:48.938979 | orchestrator | ff97ef972b99 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_volume
2026-04-06 04:08:48.938993 | orchestrator | 3301b79d27d3 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler
2026-04-06 04:08:48.939009 | orchestrator | 5dce6f6b8fcf registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) cinder_api
2026-04-06 04:08:48.939024 | orchestrator | 7d004b4463d5 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) glance_api
2026-04-06 04:08:48.939040 | orchestrator | 467984dae45d registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_console
2026-04-06 04:08:48.939065 | orchestrator | 0dbc5f0cdd31 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_apiserver
2026-04-06 04:08:48.939107 | orchestrator | 56bcba1ae960 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) horizon
2026-04-06 04:08:48.939124 | orchestrator | 57243af3ae62 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_novncproxy
2026-04-06 04:08:48.939139 | orchestrator | 313979207f08 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_conductor
2026-04-06 04:08:48.939154 | orchestrator | c967da5eb944 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_api
2026-04-06 04:08:48.939169 | orchestrator | 6140ebbe2cdb registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 46 minutes ago Up 45 minutes (healthy) nova_scheduler
2026-04-06 04:08:48.939186 | orchestrator | 8012ce5c83d0 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) neutron_server
2026-04-06 04:08:48.939201 | orchestrator | 3dea1d0f92cb registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) placement_api
2026-04-06 04:08:48.939217 | orchestrator | 2f4c522396d0 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone
2026-04-06 04:08:48.939234 | orchestrator | b132465c6e58 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_fernet
2026-04-06 04:08:48.939249 | orchestrator | 57d1c472bd56 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone_ssh
2026-04-06 04:08:48.939266 | orchestrator | 1c9f82174cec registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 59 minutes ago Up 59 minutes ceph-mgr-testbed-node-0
2026-04-06 04:08:48.939282 | orchestrator | 53ca4c848616 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0
2026-04-06 04:08:48.939308 | orchestrator | 7ab3f7ebb0fe registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0
2026-04-06 04:08:48.939328 | orchestrator | 426884552b9f registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-04-06 04:08:48.939344 | orchestrator | c7f657c22358 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-04-06 04:08:48.939360 | orchestrator | 6aa3d595ea52 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-04-06 04:08:48.939377 | orchestrator | 434102902c0f registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-04-06 04:08:48.939393 | orchestrator | 418c8e9bff5b registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-04-06 04:08:48.939421 | orchestrator | 22f3edcd50c6 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-04-06 04:08:48.939437 | orchestrator | d727b90005c7 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-04-06 04:08:48.939464 | orchestrator | 4b65eda7f719 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-04-06 04:08:48.939482 | orchestrator | ebb1ce947a4e registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-04-06 04:08:48.939499 | orchestrator | e53227900e51 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-04-06 04:08:48.939516 | orchestrator | 9914f03628ab registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-04-06 04:08:48.939531 | orchestrator | 8bedae4daea5 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards
2026-04-06 04:08:48.939547 | orchestrator | 30e960baf6d5 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch
2026-04-06 04:08:48.939563 | orchestrator | cfed8b9192ad registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-04-06 04:08:48.939581 | orchestrator | e6527f5ae3bd registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-04-06 04:08:48.939623 | orchestrator | 830f8b50a318 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-04-06 04:08:48.939643 | orchestrator | d9526f06b4d4 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-04-06 04:08:48.939661 | orchestrator | e09e399fcea2 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-04-06 04:08:48.939678 | orchestrator | 96f96c1b2b83 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-04-06 04:08:49.352102 | orchestrator |
2026-04-06 04:08:49.352212 | orchestrator | ## Images @ testbed-node-0
2026-04-06 04:08:49.352231 | orchestrator |
2026-04-06 04:08:49.352244 | orchestrator | + echo
2026-04-06 04:08:49.352257 | orchestrator | + echo '## Images @ testbed-node-0'
2026-04-06 04:08:49.352270 | orchestrator | + echo
2026-04-06 04:08:49.352282 | orchestrator | + osism container testbed-node-0 images
2026-04-06 04:08:51.995357 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-06 04:08:51.995465 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-04-06 04:08:51.995480 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-04-06 04:08:51.995491 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-04-06 04:08:51.995537 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-04-06 04:08:51.995548 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-04-06 04:08:51.995558 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-06 04:08:51.995567 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-06 04:08:51.995577 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-04-06 04:08:51.995587 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-04-06 04:08:51.995675 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-04-06 04:08:51.995685 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-06 04:08:51.995695 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-04-06 04:08:51.995705 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-04-06 04:08:51.995714 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-04-06 04:08:51.995724 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-04-06 04:08:51.995733 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-04-06 04:08:51.995743 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-04-06 04:08:51.995764 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-06 04:08:51.995774 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-04-06 04:08:51.995784 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-06 04:08:51.995794 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-04-06 04:08:51.995804 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-04-06 04:08:51.995814 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-04-06 04:08:51.995823 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB
2026-04-06 04:08:51.995833 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB
2026-04-06 04:08:51.995843 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB
2026-04-06 04:08:51.995852 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB
2026-04-06 04:08:51.995863 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB
2026-04-06 04:08:51.995880 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-04-06 04:08:51.995907 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-04-06 04:08:51.995924 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-04-06 04:08:51.995965 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-04-06 04:08:51.995982 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-04-06 04:08:51.996000 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-04-06 04:08:51.996016 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-04-06 04:08:51.996032 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB
2026-04-06 04:08:51.996049 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB
2026-04-06 04:08:51.996066 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB
2026-04-06 04:08:51.996082 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB
2026-04-06 04:08:51.996097 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB
2026-04-06 04:08:51.996113 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB
2026-04-06 04:08:51.996130 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB
2026-04-06 04:08:51.996146 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB
2026-04-06 04:08:51.996164 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB
2026-04-06 04:08:51.996181 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB
2026-04-06 04:08:51.996199 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-04-06 04:08:51.996227 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-04-06 04:08:51.996244 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-04-06 04:08:51.996260 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-04-06 04:08:51.996271 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-04-06 04:08:51.996280 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-04-06 04:08:51.996290 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-04-06 04:08:51.996299 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-04-06 04:08:51.996309 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-04-06 04:08:51.996319 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-04-06 04:08:51.996337 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-04-06 04:08:51.996347 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-04-06 04:08:51.996357 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB
2026-04-06 04:08:51.996366 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB
2026-04-06 04:08:51.996376 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB
2026-04-06 04:08:51.996385 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB
2026-04-06 04:08:51.996395 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB
2026-04-06 04:08:51.996405 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB
2026-04-06 04:08:51.996423 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB
2026-04-06 04:08:51.996433 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB
2026-04-06 04:08:51.996443 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB
2026-04-06 04:08:51.996452 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB
2026-04-06 04:08:51.996462 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB
2026-04-06 04:08:51.996472 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB
2026-04-06 04:08:52.377660 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-06 04:08:52.378122 | orchestrator | ++ semver 9.5.0 5.0.0
2026-04-06 04:08:52.439329 | orchestrator |
2026-04-06 04:08:52.439432 | orchestrator | ## Containers @ testbed-node-1
2026-04-06 04:08:52.439457 | orchestrator |
2026-04-06 04:08:52.439471 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-06 04:08:52.439483 | orchestrator | + echo
2026-04-06 04:08:52.439496 | orchestrator | + echo '## Containers @ testbed-node-1'
2026-04-06 04:08:52.439512 | orchestrator | + echo
2026-04-06 04:08:52.439524 | orchestrator | + osism container testbed-node-1 ps
2026-04-06 04:08:55.302561 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-06 04:08:55.302704 | orchestrator | 90da2edb8491 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-04-06 04:08:55.302717 | orchestrator | 67219f569a3a registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-04-06 04:08:55.302725 | orchestrator | 8cc933a38114 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana
2026-04-06 04:08:55.302732 | orchestrator | c0dd42ba73d1 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-04-06 04:08:55.302758 | orchestrator | 04420c9137bf registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 9 minutes prometheus_cadvisor
2026-04-06 04:08:55.302784 | orchestrator | f3e8e920282e registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-04-06 04:08:55.302792 | orchestrator | d6d95292b5c5 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-04-06 04:08:55.302806 | orchestrator | dad5a3078742 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-04-06 04:08:55.302814 | orchestrator | b2a2318a6d77 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-04-06 04:08:55.302821 | orchestrator | 38dacaf5c255 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 14 minutes (healthy) manila_scheduler
2026-04-06 04:08:55.302828 | orchestrator | 4dbf7dbc3b7c registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data
2026-04-06 04:08:55.302836 | orchestrator | 5f8c8f3a7235 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api
2026-04-06 04:08:55.302843 | orchestrator | 12bb8cc91bee registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier
2026-04-06 04:08:55.302853 | orchestrator | 8b318707aaf3 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener
2026-04-06 04:08:55.302865 | orchestrator | 94d2a79e297d registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator
2026-04-06 04:08:55.302875 | orchestrator | df657e4dfb44 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-04-06 04:08:55.302883 | orchestrator | e3c4ff4b0a18 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central
2026-04-06 04:08:55.302890 | orchestrator | e9eb3b0499de registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification
2026-04-06 04:08:55.302897 | orchestrator | fc9892eb0ab4 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker
2026-04-06 04:08:55.302930 | orchestrator | 8aec990d2461 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping
2026-04-06 04:08:55.302938 | orchestrator | 0a09c2312655 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager
2026-04-06 04:08:55.302946 | orchestrator | e542151a594e registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent
2026-04-06 04:08:55.302953 | orchestrator | 418cf3764dd2 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api
2026-04-06 04:08:55.302967 | orchestrator | 2cbbc3f9d6f7 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker
2026-04-06 04:08:55.302974 | orchestrator | b5019c3f094b registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns
2026-04-06 04:08:55.302982 | orchestrator | 7242d0f6693a registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer
2026-04-06 04:08:55.302989 | orchestrator | e90608f736e9 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central
2026-04-06 04:08:55.303001 | orchestrator | 7f459f24696a registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_api
2026-04-06 04:08:55.303009 | orchestrator | 4ab3caca889b registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_backend_bind9
2026-04-06 04:08:55.303016 | orchestrator | 49e38bd431f9 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_worker
2026-04-06 04:08:55.303206 | orchestrator | 44b06128c048 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_keystone_listener
2026-04-06 04:08:55.303228 | orchestrator | e3f4327ddf44 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api
2026-04-06 04:08:55.303240 | orchestrator | 7a9ee51d1e90 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_backup
2026-04-06 04:08:55.303253 | orchestrator | a0806db13c5d registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_volume
2026-04-06 04:08:55.303265 | orchestrator | 8b0123febc6f registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler
2026-04-06 04:08:55.303277 | orchestrator | a409991f0f0c registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api
2026-04-06 04:08:55.303288 | orchestrator | 46642f99c2a8 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) glance_api
2026-04-06 04:08:55.303299 | orchestrator | 8ed1c2ffc24d registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_console
2026-04-06 04:08:55.303310 | orchestrator | e5e5913fcd2c registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_apiserver
2026-04-06 04:08:55.303322 | orchestrator | 6b29d2e0c931 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) horizon
2026-04-06 04:08:55.303333 | orchestrator | ec43af094514 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_novncproxy
2026-04-06 04:08:55.303354 | orchestrator | 682c4e840402 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_conductor
2026-04-06 04:08:55.303366 | orchestrator | 6538fe1ebd72 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_api
2026-04-06 04:08:55.303378 | orchestrator | 21f7c260b6fc registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_scheduler
2026-04-06 04:08:55.303389 | orchestrator | 0b6e4c46ef1c registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) neutron_server
2026-04-06 04:08:55.303401 | orchestrator | f78175c7cf0c registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) placement_api
2026-04-06 04:08:55.303412 | orchestrator | 30f6a4524910 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone
2026-04-06 04:08:55.303424 | orchestrator | baf2589c69c0 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_fernet
2026-04-06 04:08:55.303435 | orchestrator | d04b52e60d00 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone_ssh
2026-04-06 04:08:55.303447 | orchestrator | cf06746505f1 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 59 minutes ago Up 59 minutes ceph-mgr-testbed-node-1
2026-04-06 04:08:55.303459 | orchestrator | 2a5357901a4d registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1
2026-04-06 04:08:55.303479 | orchestrator | 46d5ea15fe96 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1
2026-04-06 04:08:55.303490 | orchestrator | 09e7e9a75f2a registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-04-06 04:08:55.303501 | orchestrator | 44542a6064be registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-04-06 04:08:55.303519 | orchestrator | c35ef846ab84 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-04-06 04:08:55.303530 | orchestrator | bb87443ef4bf registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-04-06 04:08:55.303541 | orchestrator | 35eec4ff5827 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-04-06 04:08:55.303553 | orchestrator | 959c4f324c9f registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-04-06 04:08:55.303573 | orchestrator | 6821f129b9e0 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-04-06 04:08:55.303584 | orchestrator | c864ddbf221a registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-04-06 04:08:55.303659 | orchestrator | cb5d645e2a85 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-04-06 04:08:55.303673 | orchestrator | 5b0d838ef7f1 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-04-06 04:08:55.303686 | orchestrator | 50131fb00709 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) memcached
2026-04-06 04:08:55.303698 | orchestrator | d1a45be0f749 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards
2026-04-06 04:08:55.303710 | orchestrator | 2a9f42963405 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch
2026-04-06 04:08:55.303722 | orchestrator | 84431684417f registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-04-06 04:08:55.303732 | orchestrator | 9cb6a3084de9 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-04-06 04:08:55.303739 | orchestrator | dda50c877935 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-04-06 04:08:55.303750 | orchestrator | f749bebfe3f4 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-04-06 04:08:55.303763 | orchestrator | f7ccd40e1e11 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-04-06 04:08:55.303770 | orchestrator | fd80c60ce231 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-04-06 04:08:55.681941 | orchestrator |
2026-04-06 04:08:55.682075 | orchestrator | ## Images @ testbed-node-1
2026-04-06 04:08:55.682091 | orchestrator |
2026-04-06 04:08:55.682101 | orchestrator | + echo
2026-04-06 04:08:55.682111 | orchestrator | + echo '## Images @ testbed-node-1'
2026-04-06 04:08:55.682121 | orchestrator | + echo
2026-04-06 04:08:55.682130 | orchestrator | + osism container testbed-node-1 images
2026-04-06 04:08:58.309887 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-06 04:08:58.309983 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-04-06 04:08:58.310000 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-04-06 04:08:58.310056 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-04-06 04:08:58.310071 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-04-06 04:08:58.310083 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-04-06 04:08:58.310116 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-06 04:08:58.310123 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-06 04:08:58.310130 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-04-06 04:08:58.310136 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-04-06 04:08:58.310142 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-04-06 04:08:58.310148 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-06 04:08:58.310155 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-04-06 04:08:58.310161 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-04-06 04:08:58.310167 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-04-06 04:08:58.310173 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-04-06 04:08:58.310180 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-04-06 04:08:58.310186 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-04-06 04:08:58.310192 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-06 04:08:58.310198 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-04-06 04:08:58.310223 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-06 04:08:58.310230 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-04-06 04:08:58.310236 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-04-06 04:08:58.310243 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-04-06 04:08:58.310249 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB
2026-04-06 04:08:58.310255 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB
2026-04-06 04:08:58.310262 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB
2026-04-06 04:08:58.310272 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB
2026-04-06 04:08:58.310278 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB
2026-04-06 04:08:58.310284 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-04-06 04:08:58.310290 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-04-06 04:08:58.310297 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-04-06 04:08:58.310326 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-04-06 04:08:58.310333 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-04-06 04:08:58.310339 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-04-06 04:08:58.310345 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-04-06 04:08:58.310352 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB
2026-04-06 04:08:58.310358 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB
2026-04-06 04:08:58.310364 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB
2026-04-06 04:08:58.310370 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB
2026-04-06 04:08:58.310377 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB
2026-04-06 04:08:58.310383 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB
2026-04-06 04:08:58.310389 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB
2026-04-06 04:08:58.310395 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB
2026-04-06 04:08:58.310401 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB
2026-04-06 04:08:58.310408 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB
2026-04-06 04:08:58.310414 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-04-06 04:08:58.310420 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-04-06 04:08:58.310426 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-04-06 04:08:58.310433 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-04-06 04:08:58.310439 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-04-06 04:08:58.310447 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-04-06 04:08:58.310454 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-04-06 04:08:58.310461 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-04-06 04:08:58.310468 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-04-06 04:08:58.310476 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-04-06 04:08:58.310483 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-04-06 04:08:58.310491 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-04-06 04:08:58.310504 |
orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB 2026-04-06 04:08:58.310512 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB 2026-04-06 04:08:58.310519 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB 2026-04-06 04:08:58.310526 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB 2026-04-06 04:08:58.310534 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB 2026-04-06 04:08:58.310541 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB 2026-04-06 04:08:58.310552 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB 2026-04-06 04:08:58.310560 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB 2026-04-06 04:08:58.310568 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB 2026-04-06 04:08:58.310575 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB 2026-04-06 04:08:58.310582 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB 2026-04-06 04:08:58.310612 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB 2026-04-06 04:08:58.659475 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-06 04:08:58.659767 | orchestrator | ++ semver 9.5.0 5.0.0 2026-04-06 04:08:58.725335 | orchestrator | 2026-04-06 04:08:58.725422 | orchestrator | ## Containers @ testbed-node-2 2026-04-06 04:08:58.725433 | orchestrator | 
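The trace above calls a `semver` helper (`semver 9.5.0 5.0.0`) and then tests its result with `[[ 1 -eq -1 ]]`, i.e. the per-node reporting loop is skipped only when the manager version is older than some minimum. A minimal sketch of such a compare helper, assuming it prints -1/0/1 for older/equal/newer (the real helper in the testbed scripts may be implemented differently):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the `semver` compare helper seen in the trace:
# prints -1, 0, or 1 when $1 is older than, equal to, or newer than $2.
semver() {
    local a=$1 b=$2
    if [[ "$a" == "$b" ]]; then echo 0; return; fi
    # sort -V orders dotted version strings naturally; the older one sorts first
    if [[ "$(printf '%s\n' "$a" "$b" | sort -V | head -n1)" == "$a" ]]; then
        echo -1
    else
        echo 1
    fi
}

# 9.5.0 is newer than 5.0.0, so this prints 1 and a guard like
# `[[ $(semver 9.5.0 5.0.0) -eq -1 ]]` is false, so the node is not skipped.
semver 9.5.0 5.0.0
```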
2026-04-06 04:08:58.725441 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-06 04:08:58.725449 | orchestrator | + echo
2026-04-06 04:08:58.725458 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-04-06 04:08:58.725466 | orchestrator | + echo
2026-04-06 04:08:58.725474 | orchestrator | + osism container testbed-node-2 ps
2026-04-06 04:09:01.506377 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-06 04:09:01.506507 | orchestrator | 6bbade38e032 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-04-06 04:09:01.506528 | orchestrator | d6757e989cd5 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api
2026-04-06 04:09:01.506542 | orchestrator | 08f4786f9514 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana
2026-04-06 04:09:01.506555 | orchestrator | bb5d6a80274c registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-04-06 04:09:01.506569 | orchestrator | e74e47ceeff8 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor
2026-04-06 04:09:01.506583 | orchestrator | a008ed42b92c registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-04-06 04:09:01.506621 | orchestrator | eb89545bbe14 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-04-06 04:09:01.506657 | orchestrator | 9a38fde61550 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-04-06 04:09:01.506672 | orchestrator | ae63d50bf806 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-04-06 04:09:01.506685 | orchestrator | 1da0768d2af8 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler
2026-04-06 04:09:01.506697 | orchestrator | 42be10ae85ea registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data
2026-04-06 04:09:01.506715 | orchestrator | 87c3b01ecd0f registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api
2026-04-06 04:09:01.506728 | orchestrator | 14c576b59a0f registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier
2026-04-06 04:09:01.506741 | orchestrator | 3c76aa9f6f6d registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener
2026-04-06 04:09:01.506752 | orchestrator | a65ce5bba9cc registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator
2026-04-06 04:09:01.506764 | orchestrator | ca88bcae24b1 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 19 minutes ago Up 18 minutes (healthy) aodh_api
2026-04-06 04:09:01.506777 | orchestrator | 1b23f1f9af61 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central
2026-04-06 04:09:01.506789 | orchestrator | 939e30745d1d registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification
2026-04-06 04:09:01.506801 | orchestrator | 008a8c796fbf registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker
2026-04-06 04:09:01.506833 | orchestrator | ee261a056d8d registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping
2026-04-06 04:09:01.506846 | orchestrator | 57332d22062a registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager
2026-04-06 04:09:01.506859 | orchestrator | 1cdd34df7caf registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent
2026-04-06 04:09:01.506871 | orchestrator | 12b2d0dcb3b4 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api
2026-04-06 04:09:01.506883 | orchestrator | ad5e6970b1a8 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker
2026-04-06 04:09:01.506904 | orchestrator | 655ef8199851 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns
2026-04-06 04:09:01.506918 | orchestrator | a589393de337 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer
2026-04-06 04:09:01.506931 | orchestrator | 260c6ab45191 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central
2026-04-06 04:09:01.506943 | orchestrator | aef8b0aa10b4 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_api
2026-04-06 04:09:01.506956 | orchestrator | c842949f49ad registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) designate_backend_bind9
2026-04-06 04:09:01.506969 | orchestrator | 40b128d53d93 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_worker
2026-04-06 04:09:01.506981 | orchestrator | d95581fa7aa8 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_keystone_listener
2026-04-06 04:09:01.506994 | orchestrator | 0ed4d69185b8 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api
2026-04-06 04:09:01.507006 | orchestrator | e60cb3bef404 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_backup
2026-04-06 04:09:01.507019 | orchestrator | c9e09a20ba04 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_volume
2026-04-06 04:09:01.507032 | orchestrator | 5d7f1b6c8717 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler
2026-04-06 04:09:01.507044 | orchestrator | 132293fa383e registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api
2026-04-06 04:09:01.507057 | orchestrator | 929e29512b65 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) glance_api
2026-04-06 04:09:01.507069 | orchestrator | d281ad302d98 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_console
2026-04-06 04:09:01.507087 | orchestrator | 1a28a8d4248b registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_apiserver
2026-04-06 04:09:01.507109 | orchestrator | a8083268a6b9 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) horizon
2026-04-06 04:09:01.507123 | orchestrator | a888f8dbe4fe registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_novncproxy
2026-04-06 04:09:01.507136 | orchestrator | 524d462b98d4 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_conductor
2026-04-06 04:09:01.507155 | orchestrator | de708509af3b registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_api
2026-04-06 04:09:01.507167 | orchestrator | f0c7cc162498 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_scheduler
2026-04-06 04:09:01.507180 | orchestrator | 4a57c1e603fd registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) neutron_server
2026-04-06 04:09:01.507192 | orchestrator | 5ea0a4773a75 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) placement_api
2026-04-06 04:09:01.507205 | orchestrator | 77bfe84adfae registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone
2026-04-06 04:09:01.507218 | orchestrator | 6d3c10e393ed registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone_fernet
2026-04-06 04:09:01.507230 | orchestrator | 5c5b0c558266 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone_ssh
2026-04-06 04:09:01.507242 | orchestrator | 24ee84109bae registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 59 minutes ago Up 59 minutes ceph-mgr-testbed-node-2
2026-04-06 04:09:01.507254 | orchestrator | 34955141bc67 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2
2026-04-06 04:09:01.507266 | orchestrator | a87eea657fd7 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2
2026-04-06 04:09:01.507282 | orchestrator | 56aef4e790fd registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-04-06 04:09:01.507293 | orchestrator | 1449ea15cf6b registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-04-06 04:09:01.507305 | orchestrator | 4887c0a78ef8 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-04-06 04:09:01.507316 | orchestrator | 3456f778587f registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-04-06 04:09:01.507327 | orchestrator | a899b3ca1657 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-04-06 04:09:01.507338 | orchestrator | b3aa2485b545 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-04-06 04:09:01.507350 | orchestrator | 45aa4091437e registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-04-06 04:09:01.507368 | orchestrator | 33d0722b6c79 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-04-06 04:09:01.507387 | orchestrator | ab82f0800935 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-04-06 04:09:01.507400 | orchestrator | 3104236c3f72 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-04-06 04:09:01.507411 | orchestrator | c020bd32c663 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) memcached
2026-04-06 04:09:01.507422 | orchestrator | c0d89da19628 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards
2026-04-06 04:09:01.507433 | orchestrator | c6a590e8b1e1 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch
2026-04-06 04:09:01.507444 | orchestrator | a86b9978ccc0 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-04-06 04:09:01.507455 | orchestrator | 6d90333c9ecd registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-04-06 04:09:01.507467 | orchestrator | a1a25fbacece registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-04-06 04:09:01.507479 | orchestrator | af4cafcf250a registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-04-06 04:09:01.507490 | orchestrator | 43a969eef329 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-04-06 04:09:01.507501 | orchestrator | 72168eef6332 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-04-06 04:09:01.917848 | orchestrator |
2026-04-06 04:09:01.917944 | orchestrator | ## Images @ testbed-node-2
2026-04-06 04:09:01.917960 | orchestrator |
2026-04-06 04:09:01.917971 | orchestrator | + echo
2026-04-06 04:09:01.917982 | orchestrator | + echo '## Images @ testbed-node-2'
2026-04-06 04:09:01.917993 | orchestrator | + echo
2026-04-06 04:09:01.918003 | orchestrator | + osism container testbed-node-2 images
2026-04-06 04:09:04.590828 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-06 04:09:04.590934 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-04-06 04:09:04.590948 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-04-06 04:09:04.590961 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-04-06 04:09:04.590973 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-04-06 04:09:04.590984 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-04-06 04:09:04.590995 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-06 04:09:04.591006 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-06 04:09:04.591040 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-04-06 04:09:04.591051 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-04-06 04:09:04.591061 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-04-06 04:09:04.591075 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-06 04:09:04.591086 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-04-06 04:09:04.591098 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-04-06 04:09:04.591109 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-04-06 04:09:04.591136 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-04-06 04:09:04.591147 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-04-06 04:09:04.591158 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-04-06 04:09:04.591169 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-06 04:09:04.591179 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-04-06 04:09:04.591190 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-06 04:09:04.591201 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-04-06 04:09:04.591211 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-04-06 04:09:04.591221 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-04-06 04:09:04.591231 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB
2026-04-06 04:09:04.591240 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB
2026-04-06 04:09:04.591251 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB
2026-04-06 04:09:04.591262 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB
2026-04-06 04:09:04.591271 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB
2026-04-06 04:09:04.591283 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-04-06 04:09:04.591293 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-04-06 04:09:04.591316 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-04-06 04:09:04.591344 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-04-06 04:09:04.591355 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-04-06 04:09:04.591392 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-04-06 04:09:04.591404 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-04-06 04:09:04.591415 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB
2026-04-06 04:09:04.591426 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB
2026-04-06 04:09:04.591437 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB
2026-04-06 04:09:04.591449 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB
2026-04-06 04:09:04.591470 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB
2026-04-06 04:09:04.591490 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB
2026-04-06 04:09:04.591503 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB
2026-04-06 04:09:04.591512 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB
2026-04-06 04:09:04.591523 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB
2026-04-06 04:09:04.591533 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB
2026-04-06 04:09:04.591542 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-04-06 04:09:04.591552 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-04-06 04:09:04.591562 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-04-06 04:09:04.591573 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-04-06 04:09:04.591584 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-04-06 04:09:04.591612 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-04-06 04:09:04.591623 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-04-06 04:09:04.591635 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-04-06 04:09:04.591647 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-04-06 04:09:04.591659 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-04-06 04:09:04.591671 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-04-06 04:09:04.591683 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-04-06 04:09:04.591695 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB
2026-04-06 04:09:04.591707 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB
2026-04-06 04:09:04.591727 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB
2026-04-06 04:09:04.591738 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB
2026-04-06 04:09:04.591750 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB
2026-04-06 04:09:04.591763 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB
2026-04-06 04:09:04.591782 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB
2026-04-06 04:09:04.591794 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB
2026-04-06 04:09:04.591805 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB
2026-04-06 04:09:04.591816 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB
2026-04-06 04:09:04.591826 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB
2026-04-06 04:09:04.591837 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB
2026-04-06 04:09:04.981911 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-04-06 04:09:04.991711 | orchestrator | + set -e
2026-04-06 04:09:04.992728 | orchestrator | + source /opt/manager-vars.sh
2026-04-06 04:09:04.992786 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-06 04:09:04.992797 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-06 04:09:04.992805 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-06 04:09:04.992812 | orchestrator | ++ CEPH_VERSION=reef
2026-04-06 04:09:04.992821 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-06 04:09:04.992830 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-06 04:09:04.992837 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-06 04:09:04.992845 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-06 04:09:04.992853 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-06 04:09:04.992861 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-06 04:09:04.992868 | orchestrator | ++ export ARA=false
2026-04-06 04:09:04.992876 | orchestrator | ++ ARA=false
2026-04-06 04:09:04.992884 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-06 04:09:04.992891 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-06 04:09:04.992899 | orchestrator | ++ export TEMPEST=false
2026-04-06 04:09:04.992906 | orchestrator | ++ TEMPEST=false
2026-04-06 04:09:04.992914 | orchestrator | ++ export IS_ZUUL=true
2026-04-06 04:09:04.992921 | orchestrator | ++ IS_ZUUL=true
2026-04-06 04:09:04.992929 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-04-06 04:09:04.992936 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-04-06 04:09:04.992944 | orchestrator | ++ export EXTERNAL_API=false
2026-04-06 04:09:04.992951 | orchestrator | ++ EXTERNAL_API=false
2026-04-06 04:09:04.992958 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-06 04:09:04.992966 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-06 04:09:04.992974 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-06 04:09:04.992982 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-06 04:09:04.992989 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-06 04:09:04.992996 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-06 04:09:04.993004 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-06 04:09:04.993011 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-04-06 04:09:05.004726 | orchestrator | + set -e
2026-04-06 04:09:05.004811 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-06 04:09:05.004827 | orchestrator | ++ export INTERACTIVE=false
2026-04-06 04:09:05.004841 | orchestrator | ++ INTERACTIVE=false
2026-04-06 04:09:05.004853 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-06 04:09:05.004865 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-06 04:09:05.004878 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-06 04:09:05.006660 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-06 04:09:05.015622 | orchestrator |
2026-04-06 04:09:05.015710 | orchestrator | # Ceph status
2026-04-06 04:09:05.015728 | orchestrator |
2026-04-06 04:09:05.015757 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-06 04:09:05.015775 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-06 04:09:05.015792 | orchestrator | + echo
2026-04-06 04:09:05.015809 | orchestrator | + echo '# Ceph status'
2026-04-06 04:09:05.015825 | orchestrator | + echo
2026-04-06 04:09:05.015842 | orchestrator | + ceph -s
2026-04-06 04:09:05.653291 | orchestrator | cluster:
2026-04-06 04:09:05.653393 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-04-06 04:09:05.653408 | orchestrator | health: HEALTH_OK
2026-04-06 04:09:05.653421 | orchestrator |
2026-04-06 04:09:05.653433 | orchestrator | services:
2026-04-06 04:09:05.653445 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 71m)
2026-04-06 04:09:05.653458 | orchestrator | mgr: testbed-node-2(active, since 59m), standbys: testbed-node-1, testbed-node-0
2026-04-06 04:09:05.653470 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-04-06 04:09:05.653481 | orchestrator | osd: 6 osds: 6 up (since 68m), 6 in (since 68m)
2026-04-06 04:09:05.653492 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-04-06 04:09:05.653504 | orchestrator |
2026-04-06 04:09:05.653515 | orchestrator | data:
2026-04-06 04:09:05.653526 | orchestrator | volumes: 1/1 healthy
2026-04-06 04:09:05.653538 | orchestrator | pools: 14 pools, 401 pgs
2026-04-06 04:09:05.653549 | orchestrator | objects: 556 objects, 2.2 GiB
2026-04-06 04:09:05.653560 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2026-04-06 04:09:05.653571 | orchestrator | pgs: 401 active+clean
2026-04-06 04:09:05.653583 | orchestrator |
2026-04-06 04:09:05.710553 | orchestrator |
2026-04-06 04:09:05.710657 | orchestrator | # Ceph versions
2026-04-06 04:09:05.710665 | orchestrator |
2026-04-06 04:09:05.710670 | orchestrator | + echo
2026-04-06 04:09:05.710674 | orchestrator | + echo '# Ceph versions'
2026-04-06 04:09:05.710679 | orchestrator | + echo
2026-04-06 04:09:05.710683 | orchestrator | + ceph versions
2026-04-06 04:09:06.339464 | orchestrator | {
2026-04-06 04:09:06.339564 | orchestrator | "mon": {
2026-04-06 04:09:06.339580 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-06 04:09:06.339635 | orchestrator | },
2026-04-06 04:09:06.339648 | orchestrator | "mgr": {
2026-04-06 04:09:06.339660 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-06 04:09:06.339671 | orchestrator | },
2026-04-06 04:09:06.339682 | orchestrator | "osd": {
2026-04-06 04:09:06.339693 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-04-06 04:09:06.339704 | orchestrator | },
2026-04-06 04:09:06.339715 | orchestrator | "mds": {
2026-04-06 04:09:06.339726 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-06 04:09:06.339737 | orchestrator | },
2026-04-06 04:09:06.339749 | orchestrator | "rgw": {
2026-04-06 04:09:06.339760 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-06 04:09:06.339771 | orchestrator | },
2026-04-06 04:09:06.339782 | orchestrator | "overall": {
2026-04-06 04:09:06.339817 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-04-06 04:09:06.339829 | orchestrator | }
2026-04-06 04:09:06.339840 | orchestrator | }
2026-04-06 04:09:06.389449 | orchestrator |
2026-04-06 04:09:06.389541 | orchestrator | # Ceph OSD tree
2026-04-06 04:09:06.389556 | orchestrator |
2026-04-06 04:09:06.389568 | orchestrator | + echo
2026-04-06 04:09:06.389580 | orchestrator | + echo '# Ceph OSD tree'
2026-04-06 04:09:06.389635 | orchestrator | + echo
2026-04-06 04:09:06.389648 | orchestrator | + ceph osd df tree
2026-04-06 04:09:07.001707 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE
DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-04-06 04:09:07.001802 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 394 MiB 113 GiB 5.89 1.00 - root default 2026-04-06 04:09:07.001818 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.91 1.00 - host testbed-node-3 2026-04-06 04:09:07.001826 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 78 MiB 19 GiB 5.79 0.98 209 up osd.1 2026-04-06 04:09:07.001833 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 66 MiB 19 GiB 6.04 1.03 181 up osd.3 2026-04-06 04:09:07.001865 | orchestrator | -5 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 127 MiB 38 GiB 5.88 1.00 - host testbed-node-4 2026-04-06 04:09:07.001887 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 66 MiB 19 GiB 5.34 0.91 190 up osd.0 2026-04-06 04:09:07.001894 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 62 MiB 19 GiB 6.42 1.09 202 up osd.4 2026-04-06 04:09:07.001900 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-5 2026-04-06 04:09:07.001908 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 62 MiB 19 GiB 6.22 1.06 191 up osd.2 2026-04-06 04:09:07.001915 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 62 MiB 19 GiB 5.52 0.94 197 up osd.5 2026-04-06 04:09:07.001922 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 394 MiB 113 GiB 5.89 2026-04-06 04:09:07.001929 | orchestrator | MIN/MAX VAR: 0.91/1.09 STDDEV: 0.38 2026-04-06 04:09:07.063061 | orchestrator | 2026-04-06 04:09:07.063129 | orchestrator | # Ceph monitor status 2026-04-06 04:09:07.063138 | orchestrator | 2026-04-06 04:09:07.063146 | orchestrator | + echo 2026-04-06 04:09:07.063153 | orchestrator | + echo '# Ceph monitor status' 2026-04-06 04:09:07.063160 | orchestrator | + echo 2026-04-06 04:09:07.063167 | orchestrator | + ceph mon stat 2026-04-06 04:09:07.700823 | orchestrator | e1: 3 mons 
at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 4, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-04-06 04:09:07.754506 | orchestrator | 2026-04-06 04:09:07.754678 | orchestrator | # Ceph quorum status 2026-04-06 04:09:07.754699 | orchestrator | 2026-04-06 04:09:07.754712 | orchestrator | + echo 2026-04-06 04:09:07.754724 | orchestrator | + echo '# Ceph quorum status' 2026-04-06 04:09:07.754735 | orchestrator | + echo 2026-04-06 04:09:07.755284 | orchestrator | + ceph quorum_status 2026-04-06 04:09:07.755686 | orchestrator | + jq 2026-04-06 04:09:08.422098 | orchestrator | { 2026-04-06 04:09:08.422207 | orchestrator | "election_epoch": 4, 2026-04-06 04:09:08.422224 | orchestrator | "quorum": [ 2026-04-06 04:09:08.422236 | orchestrator | 0, 2026-04-06 04:09:08.422248 | orchestrator | 1, 2026-04-06 04:09:08.422259 | orchestrator | 2 2026-04-06 04:09:08.422270 | orchestrator | ], 2026-04-06 04:09:08.422282 | orchestrator | "quorum_names": [ 2026-04-06 04:09:08.422375 | orchestrator | "testbed-node-0", 2026-04-06 04:09:08.422398 | orchestrator | "testbed-node-1", 2026-04-06 04:09:08.422418 | orchestrator | "testbed-node-2" 2026-04-06 04:09:08.422438 | orchestrator | ], 2026-04-06 04:09:08.422457 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-04-06 04:09:08.422477 | orchestrator | "quorum_age": 4308, 2026-04-06 04:09:08.422498 | orchestrator | "features": { 2026-04-06 04:09:08.422518 | orchestrator | "quorum_con": "4540138322906710015", 2026-04-06 04:09:08.422540 | orchestrator | "quorum_mon": [ 2026-04-06 04:09:08.422560 | orchestrator | "kraken", 2026-04-06 04:09:08.422580 | orchestrator | "luminous", 2026-04-06 04:09:08.422627 | orchestrator | "mimic", 2026-04-06 04:09:08.422648 | orchestrator | 
"osdmap-prune", 2026-04-06 04:09:08.422666 | orchestrator | "nautilus", 2026-04-06 04:09:08.422686 | orchestrator | "octopus", 2026-04-06 04:09:08.422706 | orchestrator | "pacific", 2026-04-06 04:09:08.422725 | orchestrator | "elector-pinging", 2026-04-06 04:09:08.422745 | orchestrator | "quincy", 2026-04-06 04:09:08.422764 | orchestrator | "reef" 2026-04-06 04:09:08.422784 | orchestrator | ] 2026-04-06 04:09:08.422804 | orchestrator | }, 2026-04-06 04:09:08.422823 | orchestrator | "monmap": { 2026-04-06 04:09:08.422842 | orchestrator | "epoch": 1, 2026-04-06 04:09:08.422856 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-04-06 04:09:08.422871 | orchestrator | "modified": "2026-04-06T02:57:07.260924Z", 2026-04-06 04:09:08.422886 | orchestrator | "created": "2026-04-06T02:57:07.260924Z", 2026-04-06 04:09:08.422899 | orchestrator | "min_mon_release": 18, 2026-04-06 04:09:08.422912 | orchestrator | "min_mon_release_name": "reef", 2026-04-06 04:09:08.422927 | orchestrator | "election_strategy": 1, 2026-04-06 04:09:08.422940 | orchestrator | "disallowed_leaders: ": "", 2026-04-06 04:09:08.422951 | orchestrator | "stretch_mode": false, 2026-04-06 04:09:08.422989 | orchestrator | "tiebreaker_mon": "", 2026-04-06 04:09:08.423001 | orchestrator | "removed_ranks: ": "", 2026-04-06 04:09:08.423012 | orchestrator | "features": { 2026-04-06 04:09:08.423022 | orchestrator | "persistent": [ 2026-04-06 04:09:08.423033 | orchestrator | "kraken", 2026-04-06 04:09:08.423044 | orchestrator | "luminous", 2026-04-06 04:09:08.423054 | orchestrator | "mimic", 2026-04-06 04:09:08.423065 | orchestrator | "osdmap-prune", 2026-04-06 04:09:08.423076 | orchestrator | "nautilus", 2026-04-06 04:09:08.423086 | orchestrator | "octopus", 2026-04-06 04:09:08.423097 | orchestrator | "pacific", 2026-04-06 04:09:08.423108 | orchestrator | "elector-pinging", 2026-04-06 04:09:08.423126 | orchestrator | "quincy", 2026-04-06 04:09:08.423144 | orchestrator | "reef" 2026-04-06 
04:09:08.423162 | orchestrator | ], 2026-04-06 04:09:08.423180 | orchestrator | "optional": [] 2026-04-06 04:09:08.423195 | orchestrator | }, 2026-04-06 04:09:08.423212 | orchestrator | "mons": [ 2026-04-06 04:09:08.423232 | orchestrator | { 2026-04-06 04:09:08.423250 | orchestrator | "rank": 0, 2026-04-06 04:09:08.423269 | orchestrator | "name": "testbed-node-0", 2026-04-06 04:09:08.423287 | orchestrator | "public_addrs": { 2026-04-06 04:09:08.423304 | orchestrator | "addrvec": [ 2026-04-06 04:09:08.423315 | orchestrator | { 2026-04-06 04:09:08.423326 | orchestrator | "type": "v2", 2026-04-06 04:09:08.423338 | orchestrator | "addr": "192.168.16.10:3300", 2026-04-06 04:09:08.423349 | orchestrator | "nonce": 0 2026-04-06 04:09:08.423360 | orchestrator | }, 2026-04-06 04:09:08.423371 | orchestrator | { 2026-04-06 04:09:08.423382 | orchestrator | "type": "v1", 2026-04-06 04:09:08.423393 | orchestrator | "addr": "192.168.16.10:6789", 2026-04-06 04:09:08.423403 | orchestrator | "nonce": 0 2026-04-06 04:09:08.423414 | orchestrator | } 2026-04-06 04:09:08.423425 | orchestrator | ] 2026-04-06 04:09:08.423436 | orchestrator | }, 2026-04-06 04:09:08.423447 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-04-06 04:09:08.423458 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-04-06 04:09:08.423468 | orchestrator | "priority": 0, 2026-04-06 04:09:08.423479 | orchestrator | "weight": 0, 2026-04-06 04:09:08.423490 | orchestrator | "crush_location": "{}" 2026-04-06 04:09:08.423501 | orchestrator | }, 2026-04-06 04:09:08.423512 | orchestrator | { 2026-04-06 04:09:08.423523 | orchestrator | "rank": 1, 2026-04-06 04:09:08.423534 | orchestrator | "name": "testbed-node-1", 2026-04-06 04:09:08.423545 | orchestrator | "public_addrs": { 2026-04-06 04:09:08.423556 | orchestrator | "addrvec": [ 2026-04-06 04:09:08.423567 | orchestrator | { 2026-04-06 04:09:08.423577 | orchestrator | "type": "v2", 2026-04-06 04:09:08.423588 | orchestrator | "addr": "192.168.16.11:3300", 
2026-04-06 04:09:08.423628 | orchestrator | "nonce": 0 2026-04-06 04:09:08.423639 | orchestrator | }, 2026-04-06 04:09:08.423650 | orchestrator | { 2026-04-06 04:09:08.423662 | orchestrator | "type": "v1", 2026-04-06 04:09:08.423672 | orchestrator | "addr": "192.168.16.11:6789", 2026-04-06 04:09:08.423683 | orchestrator | "nonce": 0 2026-04-06 04:09:08.423694 | orchestrator | } 2026-04-06 04:09:08.423705 | orchestrator | ] 2026-04-06 04:09:08.423716 | orchestrator | }, 2026-04-06 04:09:08.423727 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-04-06 04:09:08.423738 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-04-06 04:09:08.423751 | orchestrator | "priority": 0, 2026-04-06 04:09:08.423770 | orchestrator | "weight": 0, 2026-04-06 04:09:08.423789 | orchestrator | "crush_location": "{}" 2026-04-06 04:09:08.423806 | orchestrator | }, 2026-04-06 04:09:08.423825 | orchestrator | { 2026-04-06 04:09:08.423844 | orchestrator | "rank": 2, 2026-04-06 04:09:08.423863 | orchestrator | "name": "testbed-node-2", 2026-04-06 04:09:08.423883 | orchestrator | "public_addrs": { 2026-04-06 04:09:08.423903 | orchestrator | "addrvec": [ 2026-04-06 04:09:08.423922 | orchestrator | { 2026-04-06 04:09:08.423941 | orchestrator | "type": "v2", 2026-04-06 04:09:08.423959 | orchestrator | "addr": "192.168.16.12:3300", 2026-04-06 04:09:08.423979 | orchestrator | "nonce": 0 2026-04-06 04:09:08.423998 | orchestrator | }, 2026-04-06 04:09:08.424015 | orchestrator | { 2026-04-06 04:09:08.424032 | orchestrator | "type": "v1", 2026-04-06 04:09:08.424043 | orchestrator | "addr": "192.168.16.12:6789", 2026-04-06 04:09:08.424054 | orchestrator | "nonce": 0 2026-04-06 04:09:08.424076 | orchestrator | } 2026-04-06 04:09:08.424087 | orchestrator | ] 2026-04-06 04:09:08.424098 | orchestrator | }, 2026-04-06 04:09:08.424109 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-04-06 04:09:08.424121 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-04-06 04:09:08.424131 | 
orchestrator | "priority": 0, 2026-04-06 04:09:08.424157 | orchestrator | "weight": 0, 2026-04-06 04:09:08.424168 | orchestrator | "crush_location": "{}" 2026-04-06 04:09:08.424180 | orchestrator | } 2026-04-06 04:09:08.424190 | orchestrator | ] 2026-04-06 04:09:08.424201 | orchestrator | } 2026-04-06 04:09:08.424213 | orchestrator | } 2026-04-06 04:09:08.424445 | orchestrator | 2026-04-06 04:09:08.424477 | orchestrator | # Ceph free space status 2026-04-06 04:09:08.424497 | orchestrator | 2026-04-06 04:09:08.424516 | orchestrator | + echo 2026-04-06 04:09:08.424535 | orchestrator | + echo '# Ceph free space status' 2026-04-06 04:09:08.424554 | orchestrator | + echo 2026-04-06 04:09:08.424583 | orchestrator | + ceph df 2026-04-06 04:09:09.016390 | orchestrator | --- RAW STORAGE --- 2026-04-06 04:09:09.016459 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-04-06 04:09:09.016476 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.89 2026-04-06 04:09:09.016481 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.89 2026-04-06 04:09:09.016486 | orchestrator | 2026-04-06 04:09:09.016491 | orchestrator | --- POOLS --- 2026-04-06 04:09:09.016496 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-04-06 04:09:09.016502 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-04-06 04:09:09.016506 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-04-06 04:09:09.016511 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-04-06 04:09:09.016515 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-04-06 04:09:09.016520 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-04-06 04:09:09.016525 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-04-06 04:09:09.016529 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-04-06 04:09:09.016533 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-04-06 04:09:09.016538 | orchestrator | 
.rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2026-04-06 04:09:09.016542 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-04-06 04:09:09.016546 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-04-06 04:09:09.016550 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.90 35 GiB 2026-04-06 04:09:09.016554 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-04-06 04:09:09.016559 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-04-06 04:09:09.077970 | orchestrator | ++ semver 9.5.0 5.0.0 2026-04-06 04:09:09.129886 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-06 04:09:09.129976 | orchestrator | + osism apply facts 2026-04-06 04:09:11.420849 | orchestrator | 2026-04-06 04:09:11 | INFO  | Task 560b362a-2866-4d10-b22c-94618a93b76c (facts) was prepared for execution. 2026-04-06 04:09:11.420925 | orchestrator | 2026-04-06 04:09:11 | INFO  | It takes a moment until task 560b362a-2866-4d10-b22c-94618a93b76c (facts) has been started and output is visible here. 2026-04-06 04:09:26.116905 | orchestrator | 2026-04-06 04:09:26.116995 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-06 04:09:26.117003 | orchestrator | 2026-04-06 04:09:26.117007 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-06 04:09:26.117012 | orchestrator | Monday 06 April 2026 04:09:16 +0000 (0:00:00.303) 0:00:00.303 ********** 2026-04-06 04:09:26.117017 | orchestrator | ok: [testbed-manager] 2026-04-06 04:09:26.117024 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:09:26.117030 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:09:26.117036 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:09:26.117042 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:09:26.117047 | orchestrator | ok: [testbed-node-4] 2026-04-06 04:09:26.117074 | orchestrator | ok: [testbed-node-5] 2026-04-06 04:09:26.117080 | orchestrator | 2026-04-06 04:09:26.117086 | orchestrator | TASK 
[osism.commons.facts : Copy fact files] *********************************** 2026-04-06 04:09:26.117092 | orchestrator | Monday 06 April 2026 04:09:17 +0000 (0:00:01.286) 0:00:01.590 ********** 2026-04-06 04:09:26.117098 | orchestrator | skipping: [testbed-manager] 2026-04-06 04:09:26.117105 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:09:26.117111 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:09:26.117117 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:09:26.117123 | orchestrator | skipping: [testbed-node-3] 2026-04-06 04:09:26.117129 | orchestrator | skipping: [testbed-node-4] 2026-04-06 04:09:26.117134 | orchestrator | skipping: [testbed-node-5] 2026-04-06 04:09:26.117140 | orchestrator | 2026-04-06 04:09:26.117147 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-06 04:09:26.117153 | orchestrator | 2026-04-06 04:09:26.117159 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-06 04:09:26.117165 | orchestrator | Monday 06 April 2026 04:09:19 +0000 (0:00:01.507) 0:00:03.097 ********** 2026-04-06 04:09:26.117169 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:09:26.117173 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:09:26.117177 | orchestrator | ok: [testbed-manager] 2026-04-06 04:09:26.117181 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:09:26.117185 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:09:26.117188 | orchestrator | ok: [testbed-node-4] 2026-04-06 04:09:26.117192 | orchestrator | ok: [testbed-node-5] 2026-04-06 04:09:26.117196 | orchestrator | 2026-04-06 04:09:26.117200 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-06 04:09:26.117204 | orchestrator | 2026-04-06 04:09:26.117208 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-06 04:09:26.117212 | orchestrator | Monday 06 April 
2026 04:09:24 +0000 (0:00:05.671) 0:00:08.769 ********** 2026-04-06 04:09:26.117216 | orchestrator | skipping: [testbed-manager] 2026-04-06 04:09:26.117220 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:09:26.117223 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:09:26.117227 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:09:26.117231 | orchestrator | skipping: [testbed-node-3] 2026-04-06 04:09:26.117234 | orchestrator | skipping: [testbed-node-4] 2026-04-06 04:09:26.117238 | orchestrator | skipping: [testbed-node-5] 2026-04-06 04:09:26.117242 | orchestrator | 2026-04-06 04:09:26.117246 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 04:09:26.117250 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-06 04:09:26.117256 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-06 04:09:26.117260 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-06 04:09:26.117264 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-06 04:09:26.117268 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-06 04:09:26.117271 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-06 04:09:26.117275 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-06 04:09:26.117279 | orchestrator | 2026-04-06 04:09:26.117283 | orchestrator | 2026-04-06 04:09:26.117287 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 04:09:26.117290 | orchestrator | Monday 06 April 2026 04:09:25 +0000 (0:00:00.705) 0:00:09.474 ********** 2026-04-06 04:09:26.117299 
| orchestrator | =============================================================================== 2026-04-06 04:09:26.117303 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.67s 2026-04-06 04:09:26.117307 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.51s 2026-04-06 04:09:26.117310 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.29s 2026-04-06 04:09:26.117314 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.71s 2026-04-06 04:09:26.502743 | orchestrator | + osism validate ceph-mons 2026-04-06 04:10:01.209889 | orchestrator | 2026-04-06 04:10:01.210003 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-04-06 04:10:01.210085 | orchestrator | 2026-04-06 04:10:01.210098 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-06 04:10:01.210160 | orchestrator | Monday 06 April 2026 04:09:44 +0000 (0:00:00.497) 0:00:00.497 ********** 2026-04-06 04:10:01.210176 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-06 04:10:01.210188 | orchestrator | 2026-04-06 04:10:01.210200 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-06 04:10:01.210211 | orchestrator | Monday 06 April 2026 04:09:45 +0000 (0:00:00.922) 0:00:01.419 ********** 2026-04-06 04:10:01.210223 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-06 04:10:01.210234 | orchestrator | 2026-04-06 04:10:01.210247 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-06 04:10:01.210266 | orchestrator | Monday 06 April 2026 04:09:46 +0000 (0:00:01.100) 0:00:02.520 ********** 2026-04-06 04:10:01.210284 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:01.210304 | orchestrator 
| 2026-04-06 04:10:01.210324 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-06 04:10:01.210344 | orchestrator | Monday 06 April 2026 04:09:46 +0000 (0:00:00.133) 0:00:02.653 ********** 2026-04-06 04:10:01.210364 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:01.210383 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:10:01.210397 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:10:01.210410 | orchestrator | 2026-04-06 04:10:01.210423 | orchestrator | TASK [Get container info] ****************************************************** 2026-04-06 04:10:01.210436 | orchestrator | Monday 06 April 2026 04:09:46 +0000 (0:00:00.337) 0:00:02.991 ********** 2026-04-06 04:10:01.210449 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:10:01.210462 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:01.210474 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:10:01.210487 | orchestrator | 2026-04-06 04:10:01.210501 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-06 04:10:01.210514 | orchestrator | Monday 06 April 2026 04:09:47 +0000 (0:00:01.046) 0:00:04.037 ********** 2026-04-06 04:10:01.210528 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:10:01.210541 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:10:01.210554 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:10:01.210568 | orchestrator | 2026-04-06 04:10:01.210581 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-06 04:10:01.210593 | orchestrator | Monday 06 April 2026 04:09:48 +0000 (0:00:00.327) 0:00:04.365 ********** 2026-04-06 04:10:01.210677 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:01.210690 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:10:01.210703 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:10:01.210716 | orchestrator | 2026-04-06 04:10:01.210730 | orchestrator | TASK 
[Prepare test data] ******************************************************* 2026-04-06 04:10:01.210744 | orchestrator | Monday 06 April 2026 04:09:48 +0000 (0:00:00.538) 0:00:04.903 ********** 2026-04-06 04:10:01.210757 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:01.210770 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:10:01.210783 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:10:01.210796 | orchestrator | 2026-04-06 04:10:01.210809 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-04-06 04:10:01.210844 | orchestrator | Monday 06 April 2026 04:09:48 +0000 (0:00:00.344) 0:00:05.248 ********** 2026-04-06 04:10:01.210856 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:10:01.210867 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:10:01.210878 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:10:01.210889 | orchestrator | 2026-04-06 04:10:01.210899 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-04-06 04:10:01.210911 | orchestrator | Monday 06 April 2026 04:09:49 +0000 (0:00:00.317) 0:00:05.565 ********** 2026-04-06 04:10:01.210922 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:01.210932 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:10:01.210943 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:10:01.210954 | orchestrator | 2026-04-06 04:10:01.210972 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-06 04:10:01.210983 | orchestrator | Monday 06 April 2026 04:09:49 +0000 (0:00:00.531) 0:00:06.096 ********** 2026-04-06 04:10:01.210994 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:10:01.211005 | orchestrator | 2026-04-06 04:10:01.211016 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-06 04:10:01.211027 | orchestrator | Monday 06 April 2026 04:09:50 +0000 (0:00:00.256) 0:00:06.353 
********** 2026-04-06 04:10:01.211038 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:10:01.211049 | orchestrator | 2026-04-06 04:10:01.211060 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-06 04:10:01.211071 | orchestrator | Monday 06 April 2026 04:09:50 +0000 (0:00:00.274) 0:00:06.627 ********** 2026-04-06 04:10:01.211082 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:10:01.211093 | orchestrator | 2026-04-06 04:10:01.211104 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 04:10:01.211115 | orchestrator | Monday 06 April 2026 04:09:50 +0000 (0:00:00.267) 0:00:06.895 ********** 2026-04-06 04:10:01.211127 | orchestrator | 2026-04-06 04:10:01.211138 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 04:10:01.211148 | orchestrator | Monday 06 April 2026 04:09:50 +0000 (0:00:00.078) 0:00:06.973 ********** 2026-04-06 04:10:01.211159 | orchestrator | 2026-04-06 04:10:01.211170 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 04:10:01.211181 | orchestrator | Monday 06 April 2026 04:09:50 +0000 (0:00:00.074) 0:00:07.048 ********** 2026-04-06 04:10:01.211192 | orchestrator | 2026-04-06 04:10:01.211203 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-06 04:10:01.211214 | orchestrator | Monday 06 April 2026 04:09:50 +0000 (0:00:00.083) 0:00:07.131 ********** 2026-04-06 04:10:01.211225 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:10:01.211236 | orchestrator | 2026-04-06 04:10:01.211247 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-06 04:10:01.211258 | orchestrator | Monday 06 April 2026 04:09:51 +0000 (0:00:00.297) 0:00:07.429 ********** 2026-04-06 04:10:01.211269 | orchestrator | skipping: 
[testbed-node-0] 2026-04-06 04:10:01.211280 | orchestrator | 2026-04-06 04:10:01.211316 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-04-06 04:10:01.211337 | orchestrator | Monday 06 April 2026 04:09:51 +0000 (0:00:00.297) 0:00:07.726 ********** 2026-04-06 04:10:01.211358 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:01.211379 | orchestrator | 2026-04-06 04:10:01.211400 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-04-06 04:10:01.211420 | orchestrator | Monday 06 April 2026 04:09:51 +0000 (0:00:00.146) 0:00:07.873 ********** 2026-04-06 04:10:01.211439 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:10:01.211451 | orchestrator | 2026-04-06 04:10:01.211465 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-04-06 04:10:01.211476 | orchestrator | Monday 06 April 2026 04:09:53 +0000 (0:00:01.664) 0:00:09.538 ********** 2026-04-06 04:10:01.211487 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:01.211498 | orchestrator | 2026-04-06 04:10:01.211519 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-04-06 04:10:01.211531 | orchestrator | Monday 06 April 2026 04:09:53 +0000 (0:00:00.579) 0:00:10.117 ********** 2026-04-06 04:10:01.211541 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:10:01.211552 | orchestrator | 2026-04-06 04:10:01.211563 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-04-06 04:10:01.211574 | orchestrator | Monday 06 April 2026 04:09:53 +0000 (0:00:00.128) 0:00:10.245 ********** 2026-04-06 04:10:01.211585 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:01.211620 | orchestrator | 2026-04-06 04:10:01.211641 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-04-06 04:10:01.211655 | orchestrator | 
Monday 06 April 2026 04:09:54 +0000 (0:00:00.365) 0:00:10.611 ********** 2026-04-06 04:10:01.211666 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:01.211677 | orchestrator | 2026-04-06 04:10:01.211688 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-04-06 04:10:01.211699 | orchestrator | Monday 06 April 2026 04:09:54 +0000 (0:00:00.338) 0:00:10.950 ********** 2026-04-06 04:10:01.211709 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:10:01.211720 | orchestrator | 2026-04-06 04:10:01.211731 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-04-06 04:10:01.211742 | orchestrator | Monday 06 April 2026 04:09:54 +0000 (0:00:00.126) 0:00:11.077 ********** 2026-04-06 04:10:01.211752 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:01.211763 | orchestrator | 2026-04-06 04:10:01.211774 | orchestrator | TASK [Prepare status test vars] ************************************************ 2026-04-06 04:10:01.211785 | orchestrator | Monday 06 April 2026 04:09:54 +0000 (0:00:00.136) 0:00:11.213 ********** 2026-04-06 04:10:01.211795 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:01.211806 | orchestrator | 2026-04-06 04:10:01.211817 | orchestrator | TASK [Gather status data] ****************************************************** 2026-04-06 04:10:01.211828 | orchestrator | Monday 06 April 2026 04:09:55 +0000 (0:00:00.144) 0:00:11.357 ********** 2026-04-06 04:10:01.211839 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:10:01.211849 | orchestrator | 2026-04-06 04:10:01.211860 | orchestrator | TASK [Set health test data] **************************************************** 2026-04-06 04:10:01.211871 | orchestrator | Monday 06 April 2026 04:09:56 +0000 (0:00:01.402) 0:00:12.760 ********** 2026-04-06 04:10:01.211882 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:01.211892 | orchestrator | 2026-04-06 04:10:01.211903 | orchestrator | TASK 
[Fail cluster-health if health is not acceptable] ************************* 2026-04-06 04:10:01.211914 | orchestrator | Monday 06 April 2026 04:09:56 +0000 (0:00:00.347) 0:00:13.108 ********** 2026-04-06 04:10:01.211925 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:10:01.211936 | orchestrator | 2026-04-06 04:10:01.211946 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-04-06 04:10:01.211957 | orchestrator | Monday 06 April 2026 04:09:56 +0000 (0:00:00.144) 0:00:13.252 ********** 2026-04-06 04:10:01.211968 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:01.211979 | orchestrator | 2026-04-06 04:10:01.211989 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-04-06 04:10:01.212007 | orchestrator | Monday 06 April 2026 04:09:57 +0000 (0:00:00.152) 0:00:13.405 ********** 2026-04-06 04:10:01.212018 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:10:01.212029 | orchestrator | 2026-04-06 04:10:01.212040 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-04-06 04:10:01.212051 | orchestrator | Monday 06 April 2026 04:09:57 +0000 (0:00:00.145) 0:00:13.550 ********** 2026-04-06 04:10:01.212062 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:10:01.212073 | orchestrator | 2026-04-06 04:10:01.212084 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-06 04:10:01.212095 | orchestrator | Monday 06 April 2026 04:09:57 +0000 (0:00:00.379) 0:00:13.929 ********** 2026-04-06 04:10:01.212105 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-06 04:10:01.212124 | orchestrator | 2026-04-06 04:10:01.212135 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-06 04:10:01.212146 | orchestrator | Monday 06 April 2026 04:09:57 +0000 (0:00:00.296) 0:00:14.226 ********** 
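The quorum pass/fail tasks above reduce to comparing the monitors listed in the monmap against the members currently in quorum, as reported by `ceph quorum_status`. A minimal standalone sketch of that comparison (the JSON shape is assumed from Ceph's `quorum_status --format json` output; this is an illustration, not the validator's actual code):

```python
import json

def check_mon_quorum(quorum_status_json: str) -> bool:
    """Return True if every monitor in the monmap is in quorum.

    Assumes the layout of `ceph quorum_status --format json`:
    monitor names under monmap.mons[].name, quorum members under
    quorum_names. Mirrors the validator's pass/fail decision,
    not its implementation.
    """
    data = json.loads(quorum_status_json)
    all_mons = {mon["name"] for mon in data["monmap"]["mons"]}
    in_quorum = set(data["quorum_names"])
    return all_mons == in_quorum

# Hypothetical data: three mons, all in quorum.
sample = json.dumps({
    "quorum_names": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
    "monmap": {"mons": [
        {"name": "testbed-node-0"},
        {"name": "testbed-node-1"},
        {"name": "testbed-node-2"},
    ]},
})
print(check_mon_quorum(sample))  # True
```

With all three mons in quorum the check passes, which matches the skipped "Fail quorum test" and executed "Pass quorum test" tasks in the run above.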
2026-04-06 04:10:01.212157 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:10:01.212168 | orchestrator | 2026-04-06 04:10:01.212179 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-06 04:10:01.212189 | orchestrator | Monday 06 April 2026 04:09:58 +0000 (0:00:00.384) 0:00:14.611 ********** 2026-04-06 04:10:01.212200 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-06 04:10:01.212211 | orchestrator | 2026-04-06 04:10:01.212222 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-06 04:10:01.212233 | orchestrator | Monday 06 April 2026 04:10:00 +0000 (0:00:02.070) 0:00:16.681 ********** 2026-04-06 04:10:01.212244 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-06 04:10:01.212255 | orchestrator | 2026-04-06 04:10:01.212266 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-06 04:10:01.212276 | orchestrator | Monday 06 April 2026 04:10:00 +0000 (0:00:00.304) 0:00:16.985 ********** 2026-04-06 04:10:01.212287 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-06 04:10:01.212298 | orchestrator | 2026-04-06 04:10:01.212318 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 04:10:04.198272 | orchestrator | Monday 06 April 2026 04:10:00 +0000 (0:00:00.281) 0:00:17.267 ********** 2026-04-06 04:10:04.198366 | orchestrator | 2026-04-06 04:10:04.198373 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 04:10:04.198378 | orchestrator | Monday 06 April 2026 04:10:01 +0000 (0:00:00.075) 0:00:17.343 ********** 2026-04-06 04:10:04.198383 | orchestrator | 2026-04-06 04:10:04.198387 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 04:10:04.198392 | 
orchestrator | Monday 06 April 2026 04:10:01 +0000 (0:00:00.074) 0:00:17.418 ********** 2026-04-06 04:10:04.198396 | orchestrator | 2026-04-06 04:10:04.198400 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-06 04:10:04.198404 | orchestrator | Monday 06 April 2026 04:10:01 +0000 (0:00:00.075) 0:00:17.493 ********** 2026-04-06 04:10:04.198408 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-06 04:10:04.198421 | orchestrator | 2026-04-06 04:10:04.198474 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-06 04:10:04.198483 | orchestrator | Monday 06 April 2026 04:10:02 +0000 (0:00:01.675) 0:00:19.169 ********** 2026-04-06 04:10:04.198490 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-06 04:10:04.198496 | orchestrator |  "msg": [ 2026-04-06 04:10:04.198503 | orchestrator |  "Validator run completed.", 2026-04-06 04:10:04.198510 | orchestrator |  "You can find the report file here:", 2026-04-06 04:10:04.198516 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-04-06T04:09:44+00:00-report.json", 2026-04-06 04:10:04.198523 | orchestrator |  "on the following host:", 2026-04-06 04:10:04.198529 | orchestrator |  "testbed-manager" 2026-04-06 04:10:04.198535 | orchestrator |  ] 2026-04-06 04:10:04.198542 | orchestrator | } 2026-04-06 04:10:04.198549 | orchestrator | 2026-04-06 04:10:04.198554 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 04:10:04.198559 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-06 04:10:04.198565 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-06 04:10:04.198569 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 
2026-04-06 04:10:04.198574 | orchestrator | 2026-04-06 04:10:04.198595 | orchestrator | 2026-04-06 04:10:04.198642 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 04:10:04.198646 | orchestrator | Monday 06 April 2026 04:10:03 +0000 (0:00:00.929) 0:00:20.099 ********** 2026-04-06 04:10:04.198650 | orchestrator | =============================================================================== 2026-04-06 04:10:04.198654 | orchestrator | Aggregate test results step one ----------------------------------------- 2.07s 2026-04-06 04:10:04.198658 | orchestrator | Write report file ------------------------------------------------------- 1.68s 2026-04-06 04:10:04.198662 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.66s 2026-04-06 04:10:04.198666 | orchestrator | Gather status data ------------------------------------------------------ 1.40s 2026-04-06 04:10:04.198670 | orchestrator | Create report output directory ------------------------------------------ 1.10s 2026-04-06 04:10:04.198673 | orchestrator | Get container info ------------------------------------------------------ 1.05s 2026-04-06 04:10:04.198677 | orchestrator | Print report file information ------------------------------------------- 0.93s 2026-04-06 04:10:04.198681 | orchestrator | Get timestamp for report file ------------------------------------------- 0.92s 2026-04-06 04:10:04.198685 | orchestrator | Set quorum test data ---------------------------------------------------- 0.58s 2026-04-06 04:10:04.198689 | orchestrator | Set test result to passed if container is existing ---------------------- 0.54s 2026-04-06 04:10:04.198693 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.53s 2026-04-06 04:10:04.198697 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.38s 2026-04-06 04:10:04.198701 | orchestrator | Pass 
cluster-health if status is OK (strict) ---------------------------- 0.38s 2026-04-06 04:10:04.198705 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.37s 2026-04-06 04:10:04.198709 | orchestrator | Set health test data ---------------------------------------------------- 0.35s 2026-04-06 04:10:04.198713 | orchestrator | Prepare test data ------------------------------------------------------- 0.34s 2026-04-06 04:10:04.198717 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.34s 2026-04-06 04:10:04.198721 | orchestrator | Prepare test data for container existance test -------------------------- 0.34s 2026-04-06 04:10:04.198724 | orchestrator | Set test result to failed if container is missing ----------------------- 0.33s 2026-04-06 04:10:04.198728 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.32s 2026-04-06 04:10:04.588164 | orchestrator | + osism validate ceph-mgrs 2026-04-06 04:10:37.660589 | orchestrator | 2026-04-06 04:10:37.660753 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-04-06 04:10:37.660768 | orchestrator | 2026-04-06 04:10:37.660776 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-06 04:10:37.660784 | orchestrator | Monday 06 April 2026 04:10:22 +0000 (0:00:00.508) 0:00:00.509 ********** 2026-04-06 04:10:37.660792 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-06 04:10:37.660799 | orchestrator | 2026-04-06 04:10:37.660806 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-06 04:10:37.660813 | orchestrator | Monday 06 April 2026 04:10:22 +0000 (0:00:00.950) 0:00:01.459 ********** 2026-04-06 04:10:37.660836 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-06 04:10:37.660844 | orchestrator | 
2026-04-06 04:10:37.660851 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-06 04:10:37.660857 | orchestrator | Monday 06 April 2026 04:10:24 +0000 (0:00:01.076) 0:00:02.535 ********** 2026-04-06 04:10:37.660865 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:37.660872 | orchestrator | 2026-04-06 04:10:37.660879 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-06 04:10:37.660886 | orchestrator | Monday 06 April 2026 04:10:24 +0000 (0:00:00.137) 0:00:02.672 ********** 2026-04-06 04:10:37.660893 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:37.660900 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:10:37.660926 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:10:37.660933 | orchestrator | 2026-04-06 04:10:37.660940 | orchestrator | TASK [Get container info] ****************************************************** 2026-04-06 04:10:37.660947 | orchestrator | Monday 06 April 2026 04:10:24 +0000 (0:00:00.321) 0:00:02.994 ********** 2026-04-06 04:10:37.660954 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:10:37.660961 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:10:37.660967 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:37.660974 | orchestrator | 2026-04-06 04:10:37.660992 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-06 04:10:37.660999 | orchestrator | Monday 06 April 2026 04:10:25 +0000 (0:00:01.037) 0:00:04.032 ********** 2026-04-06 04:10:37.661014 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:10:37.661021 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:10:37.661028 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:10:37.661035 | orchestrator | 2026-04-06 04:10:37.661042 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-06 04:10:37.661049 | orchestrator | Monday 06 April 
2026 04:10:25 +0000 (0:00:00.315) 0:00:04.347 ********** 2026-04-06 04:10:37.661055 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:37.661062 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:10:37.661074 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:10:37.661085 | orchestrator | 2026-04-06 04:10:37.661097 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-06 04:10:37.661109 | orchestrator | Monday 06 April 2026 04:10:26 +0000 (0:00:00.558) 0:00:04.906 ********** 2026-04-06 04:10:37.661121 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:37.661132 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:10:37.661143 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:10:37.661155 | orchestrator | 2026-04-06 04:10:37.661166 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2026-04-06 04:10:37.661177 | orchestrator | Monday 06 April 2026 04:10:26 +0000 (0:00:00.313) 0:00:05.220 ********** 2026-04-06 04:10:37.661187 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:10:37.661197 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:10:37.661209 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:10:37.661222 | orchestrator | 2026-04-06 04:10:37.661234 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-04-06 04:10:37.661246 | orchestrator | Monday 06 April 2026 04:10:27 +0000 (0:00:00.310) 0:00:05.531 ********** 2026-04-06 04:10:37.661258 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:37.661270 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:10:37.661282 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:10:37.661294 | orchestrator | 2026-04-06 04:10:37.661301 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-06 04:10:37.661309 | orchestrator | Monday 06 April 2026 04:10:27 +0000 (0:00:00.540) 0:00:06.071 ********** 
2026-04-06 04:10:37.661316 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:10:37.661322 | orchestrator | 2026-04-06 04:10:37.661329 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-06 04:10:37.661336 | orchestrator | Monday 06 April 2026 04:10:27 +0000 (0:00:00.306) 0:00:06.378 ********** 2026-04-06 04:10:37.661343 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:10:37.661350 | orchestrator | 2026-04-06 04:10:37.661357 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-06 04:10:37.661369 | orchestrator | Monday 06 April 2026 04:10:28 +0000 (0:00:00.297) 0:00:06.675 ********** 2026-04-06 04:10:37.661376 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:10:37.661383 | orchestrator | 2026-04-06 04:10:37.661390 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 04:10:37.661396 | orchestrator | Monday 06 April 2026 04:10:28 +0000 (0:00:00.274) 0:00:06.950 ********** 2026-04-06 04:10:37.661403 | orchestrator | 2026-04-06 04:10:37.661410 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 04:10:37.661416 | orchestrator | Monday 06 April 2026 04:10:28 +0000 (0:00:00.075) 0:00:07.025 ********** 2026-04-06 04:10:37.661431 | orchestrator | 2026-04-06 04:10:37.661437 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 04:10:37.661444 | orchestrator | Monday 06 April 2026 04:10:28 +0000 (0:00:00.075) 0:00:07.100 ********** 2026-04-06 04:10:37.661451 | orchestrator | 2026-04-06 04:10:37.661457 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-06 04:10:37.661464 | orchestrator | Monday 06 April 2026 04:10:28 +0000 (0:00:00.097) 0:00:07.198 ********** 2026-04-06 04:10:37.661471 | orchestrator | skipping: [testbed-node-0] 
2026-04-06 04:10:37.661478 | orchestrator | 2026-04-06 04:10:37.661485 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-06 04:10:37.661491 | orchestrator | Monday 06 April 2026 04:10:29 +0000 (0:00:00.313) 0:00:07.511 ********** 2026-04-06 04:10:37.661498 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:10:37.661505 | orchestrator | 2026-04-06 04:10:37.661527 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-04-06 04:10:37.661535 | orchestrator | Monday 06 April 2026 04:10:29 +0000 (0:00:00.281) 0:00:07.792 ********** 2026-04-06 04:10:37.661541 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:37.661548 | orchestrator | 2026-04-06 04:10:37.661555 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2026-04-06 04:10:37.661562 | orchestrator | Monday 06 April 2026 04:10:29 +0000 (0:00:00.168) 0:00:07.961 ********** 2026-04-06 04:10:37.661568 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:10:37.661575 | orchestrator | 2026-04-06 04:10:37.661582 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-04-06 04:10:37.661589 | orchestrator | Monday 06 April 2026 04:10:31 +0000 (0:00:01.985) 0:00:09.946 ********** 2026-04-06 04:10:37.661595 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:37.661624 | orchestrator | 2026-04-06 04:10:37.661636 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-04-06 04:10:37.661648 | orchestrator | Monday 06 April 2026 04:10:31 +0000 (0:00:00.491) 0:00:10.438 ********** 2026-04-06 04:10:37.661656 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:37.661662 | orchestrator | 2026-04-06 04:10:37.661669 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-04-06 04:10:37.661676 | orchestrator | Monday 06 April 2026 
04:10:32 +0000 (0:00:00.359) 0:00:10.797 ********** 2026-04-06 04:10:37.661682 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:10:37.661689 | orchestrator | 2026-04-06 04:10:37.661695 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-04-06 04:10:37.661702 | orchestrator | Monday 06 April 2026 04:10:32 +0000 (0:00:00.132) 0:00:10.930 ********** 2026-04-06 04:10:37.661709 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:10:37.661715 | orchestrator | 2026-04-06 04:10:37.661722 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-06 04:10:37.661729 | orchestrator | Monday 06 April 2026 04:10:32 +0000 (0:00:00.166) 0:00:11.096 ********** 2026-04-06 04:10:37.661735 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-06 04:10:37.661742 | orchestrator | 2026-04-06 04:10:37.661749 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-06 04:10:37.661755 | orchestrator | Monday 06 April 2026 04:10:32 +0000 (0:00:00.274) 0:00:11.370 ********** 2026-04-06 04:10:37.661762 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:10:37.661769 | orchestrator | 2026-04-06 04:10:37.661775 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-06 04:10:37.661782 | orchestrator | Monday 06 April 2026 04:10:33 +0000 (0:00:00.275) 0:00:11.646 ********** 2026-04-06 04:10:37.661789 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-06 04:10:37.661795 | orchestrator | 2026-04-06 04:10:37.661802 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-06 04:10:37.661809 | orchestrator | Monday 06 April 2026 04:10:34 +0000 (0:00:01.404) 0:00:13.050 ********** 2026-04-06 04:10:37.661815 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-06 
04:10:37.661827 | orchestrator | 2026-04-06 04:10:37.661834 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-06 04:10:37.661841 | orchestrator | Monday 06 April 2026 04:10:34 +0000 (0:00:00.286) 0:00:13.337 ********** 2026-04-06 04:10:37.661848 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-06 04:10:37.661854 | orchestrator | 2026-04-06 04:10:37.661861 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 04:10:37.661867 | orchestrator | Monday 06 April 2026 04:10:35 +0000 (0:00:00.281) 0:00:13.619 ********** 2026-04-06 04:10:37.661874 | orchestrator | 2026-04-06 04:10:37.661881 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 04:10:37.661888 | orchestrator | Monday 06 April 2026 04:10:35 +0000 (0:00:00.093) 0:00:13.712 ********** 2026-04-06 04:10:37.661894 | orchestrator | 2026-04-06 04:10:37.661901 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 04:10:37.661907 | orchestrator | Monday 06 April 2026 04:10:35 +0000 (0:00:00.103) 0:00:13.815 ********** 2026-04-06 04:10:37.661914 | orchestrator | 2026-04-06 04:10:37.661921 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-06 04:10:37.661927 | orchestrator | Monday 06 April 2026 04:10:35 +0000 (0:00:00.307) 0:00:14.123 ********** 2026-04-06 04:10:37.661934 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-06 04:10:37.661941 | orchestrator | 2026-04-06 04:10:37.661952 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-06 04:10:37.661959 | orchestrator | Monday 06 April 2026 04:10:37 +0000 (0:00:01.523) 0:00:15.647 ********** 2026-04-06 04:10:37.661965 | orchestrator | ok: [testbed-node-0 -> 
testbed-manager(192.168.16.5)] => { 2026-04-06 04:10:37.661972 | orchestrator |  "msg": [ 2026-04-06 04:10:37.661979 | orchestrator |  "Validator run completed.", 2026-04-06 04:10:37.661986 | orchestrator |  "You can find the report file here:", 2026-04-06 04:10:37.661993 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-04-06T04:10:22+00:00-report.json", 2026-04-06 04:10:37.662001 | orchestrator |  "on the following host:", 2026-04-06 04:10:37.662007 | orchestrator |  "testbed-manager" 2026-04-06 04:10:37.662014 | orchestrator |  ] 2026-04-06 04:10:37.662099 | orchestrator | } 2026-04-06 04:10:37.662107 | orchestrator | 2026-04-06 04:10:37.662114 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 04:10:37.662121 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-06 04:10:37.662130 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-06 04:10:37.662144 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-06 04:10:38.097014 | orchestrator | 2026-04-06 04:10:38.097100 | orchestrator | 2026-04-06 04:10:38.097111 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 04:10:38.097121 | orchestrator | Monday 06 April 2026 04:10:37 +0000 (0:00:00.478) 0:00:16.125 ********** 2026-04-06 04:10:38.097129 | orchestrator | =============================================================================== 2026-04-06 04:10:38.097137 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.99s 2026-04-06 04:10:38.097144 | orchestrator | Write report file ------------------------------------------------------- 1.52s 2026-04-06 04:10:38.097152 | orchestrator | Aggregate test results step one ----------------------------------------- 1.40s 
2026-04-06 04:10:38.097159 | orchestrator | Create report output directory ------------------------------------------ 1.08s 2026-04-06 04:10:38.097166 | orchestrator | Get container info ------------------------------------------------------ 1.04s 2026-04-06 04:10:38.097174 | orchestrator | Get timestamp for report file ------------------------------------------- 0.95s 2026-04-06 04:10:38.097203 | orchestrator | Set test result to passed if container is existing ---------------------- 0.56s 2026-04-06 04:10:38.097211 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.54s 2026-04-06 04:10:38.097219 | orchestrator | Flush handlers ---------------------------------------------------------- 0.50s 2026-04-06 04:10:38.097226 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.49s 2026-04-06 04:10:38.097234 | orchestrator | Print report file information ------------------------------------------- 0.48s 2026-04-06 04:10:38.097241 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.36s 2026-04-06 04:10:38.097248 | orchestrator | Prepare test data for container existance test -------------------------- 0.32s 2026-04-06 04:10:38.097256 | orchestrator | Set test result to failed if container is missing ----------------------- 0.32s 2026-04-06 04:10:38.097263 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2026-04-06 04:10:38.097270 | orchestrator | Print report file information ------------------------------------------- 0.31s 2026-04-06 04:10:38.097277 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.31s 2026-04-06 04:10:38.097285 | orchestrator | Aggregate test results step one ----------------------------------------- 0.31s 2026-04-06 04:10:38.097292 | orchestrator | Aggregate test results step two ----------------------------------------- 0.30s 2026-04-06 
04:10:38.097299 | orchestrator | Aggregate test results step two ----------------------------------------- 0.29s 2026-04-06 04:10:38.479271 | orchestrator | + osism validate ceph-osds 2026-04-06 04:11:01.451512 | orchestrator | 2026-04-06 04:11:01.451661 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-04-06 04:11:01.451697 | orchestrator | 2026-04-06 04:11:01.451713 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-06 04:11:01.451727 | orchestrator | Monday 06 April 2026 04:10:56 +0000 (0:00:00.518) 0:00:00.518 ********** 2026-04-06 04:11:01.451737 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-06 04:11:01.451745 | orchestrator | 2026-04-06 04:11:01.451754 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-06 04:11:01.451762 | orchestrator | Monday 06 April 2026 04:10:57 +0000 (0:00:00.921) 0:00:01.439 ********** 2026-04-06 04:11:01.451771 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-06 04:11:01.451779 | orchestrator | 2026-04-06 04:11:01.451787 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-06 04:11:01.451795 | orchestrator | Monday 06 April 2026 04:10:57 +0000 (0:00:00.618) 0:00:02.058 ********** 2026-04-06 04:11:01.451803 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-06 04:11:01.451811 | orchestrator | 2026-04-06 04:11:01.451819 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-06 04:11:01.451827 | orchestrator | Monday 06 April 2026 04:10:58 +0000 (0:00:00.818) 0:00:02.876 ********** 2026-04-06 04:11:01.451835 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:11:01.451844 | orchestrator | 2026-04-06 04:11:01.451854 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-04-06 04:11:01.451862 | orchestrator | Monday 06 April 2026 04:10:58 +0000 (0:00:00.152) 0:00:03.029 ********** 2026-04-06 04:11:01.451870 | orchestrator | skipping: [testbed-node-3] 2026-04-06 04:11:01.451878 | orchestrator | 2026-04-06 04:11:01.451887 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-06 04:11:01.451895 | orchestrator | Monday 06 April 2026 04:10:58 +0000 (0:00:00.145) 0:00:03.175 ********** 2026-04-06 04:11:01.451903 | orchestrator | skipping: [testbed-node-3] 2026-04-06 04:11:01.451911 | orchestrator | skipping: [testbed-node-4] 2026-04-06 04:11:01.451919 | orchestrator | skipping: [testbed-node-5] 2026-04-06 04:11:01.451927 | orchestrator | 2026-04-06 04:11:01.451935 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-04-06 04:11:01.451943 | orchestrator | Monday 06 April 2026 04:10:59 +0000 (0:00:00.345) 0:00:03.520 ********** 2026-04-06 04:11:01.451971 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:11:01.451980 | orchestrator | 2026-04-06 04:11:01.451988 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-06 04:11:01.451996 | orchestrator | Monday 06 April 2026 04:10:59 +0000 (0:00:00.155) 0:00:03.676 ********** 2026-04-06 04:11:01.452004 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:11:01.452012 | orchestrator | ok: [testbed-node-4] 2026-04-06 04:11:01.452019 | orchestrator | ok: [testbed-node-5] 2026-04-06 04:11:01.452027 | orchestrator | 2026-04-06 04:11:01.452035 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-04-06 04:11:01.452045 | orchestrator | Monday 06 April 2026 04:10:59 +0000 (0:00:00.379) 0:00:04.055 ********** 2026-04-06 04:11:01.452055 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:11:01.452064 | orchestrator | 2026-04-06 04:11:01.452076 | 
orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-06 04:11:01.452090 | orchestrator | Monday 06 April 2026 04:11:00 +0000 (0:00:00.924) 0:00:04.979 ********** 2026-04-06 04:11:01.452104 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:11:01.452117 | orchestrator | ok: [testbed-node-4] 2026-04-06 04:11:01.452129 | orchestrator | ok: [testbed-node-5] 2026-04-06 04:11:01.452141 | orchestrator | 2026-04-06 04:11:01.452153 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-04-06 04:11:01.452167 | orchestrator | Monday 06 April 2026 04:11:01 +0000 (0:00:00.304) 0:00:05.284 ********** 2026-04-06 04:11:01.452184 | orchestrator | skipping: [testbed-node-3] => (item={'id': '53521ce44c2bb77ba5e7e7e99b11b2b95c81ca77aa66c314d790407a83e3fb88', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-04-06 04:11:01.452203 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c60c3ea5faa482719d029846fd7186e321898967e536d93ecc381644e7e3dafb', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-04-06 04:11:01.452220 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7e7ac21aae939875deb0b06e083593af31c8adaec213972772372d16eccba1aa', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-04-06 04:11:01.452233 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fbc83ca0ebb785bf326d116f9c9ac6fa3e70878faf0579588ac7f558d59b1b5d', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  
2026-04-06 04:11:01.452243 | orchestrator | skipping: [testbed-node-3] => (item={'id': '045f36f81c47c90c532ea89d3e40d497b221bb87114aa4c092a4ef0b9989d1e8', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-04-06 04:11:01.452333 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd9030fe3b2b120ffac49d98426f14eb938f34dd2be273d20bdce4d41c83e084f', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})  2026-04-06 04:11:01.452354 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f897a4cecbc02415ad38417d73c259e4b683025ca5bc3fa7a12513add1ca6b7a', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})  2026-04-06 04:11:01.452368 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'abfb42e6729ac469bbfbd3794062b20da23feee1a549fb37ca9f3223be03dc67', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 50 minutes (healthy)'})  2026-04-06 04:11:01.452395 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e74d6001726166291e1e3e263ae2ed7990a6c73b5bbbb500dae82e93456977bc', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-04-06 04:11:01.452416 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a6d9536c1d21052676c8ca5339b8eea6ec95f343a775b602420500ed5fe0d620', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})  2026-04-06 04:11:01.452431 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'84dc19cadacfad14aa8400c992365cc572dcb07a266c752705c369d831326c98', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})  2026-04-06 04:11:01.452444 | orchestrator | ok: [testbed-node-3] => (item={'id': '7cacb7450f3606ff43a1933d3d47892c82441a21568ce8c92a2c84a212ddf792', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-06 04:11:01.452453 | orchestrator | ok: [testbed-node-3] => (item={'id': '7a7705b94c92c6a7d5514c04ed0e8bf178c9300fd8a2a2785b46622788ef2b46', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-06 04:11:01.452461 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c68ec7ad276a884996cd67c30a9720f059ced0d56c0ed6f3b3534492712fbeab', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-04-06 04:11:01.452470 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7337d3053fbaf078020b512d8ac1c5a582b23c2372649ee19ee77a7082e10645', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-06 04:11:01.452478 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bac37ecdb0464b1f7bc64d211bfbf2c56c053eba45d6cd0cd83a4b659043dac5', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-06 04:11:01.452487 | orchestrator | skipping: [testbed-node-3] => (item={'id': '35dac4dc6f3ed19a7910d16a304fc6b44361b2cf4947ea5ed6283652f247829e', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 
'running', 'status': 'Up 2 hours'})  2026-04-06 04:11:01.452501 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a73ad93ae7fd37cd0007c6f66dbf43f8b4507cd4ebcc40697f5a33a31ba906cc', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-06 04:11:01.452514 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'da0aa41217bde422d29be7baa27257437d646f132b1fe7db75ce03671ab25a64', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-06 04:11:01.452528 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6a29e4ac45173845606b2f86016cea6c634df0a17c164d1959e41ba0a22dda6c', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-04-06 04:11:01.452557 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e631a3380be18ab240f9321948d6d63604cf89972fb052f3e94ff754f0705807', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-04-06 04:11:01.726396 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cdfdcbb0c32fcc0e1d01e698ee28d031c3bf3b983058fea6143b4f6930686118', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-04-06 04:11:01.726545 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b3a0342cbf6de8e8f39b62925993e62f84ed88f7c3e229d2bc9a0c09476b46a9', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-04-06 04:11:01.726571 | orchestrator | skipping: 
[testbed-node-4] => (item={'id': '96d601c8212e51318914b0961fe1f7be20612c51347ee88855cbedb2f7376f92', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-04-06 04:11:01.726672 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ebfc4127ebb460ae6f4502c1600b78c3c54d4f58e0083e22cc03a7583631414f', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})  2026-04-06 04:11:01.726725 | orchestrator | skipping: [testbed-node-4] => (item={'id': '24bb3ab2900a4a8f2f56f690cf8b922743ee0ea9e9dee2073077161817b95610', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})  2026-04-06 04:11:01.726744 | orchestrator | skipping: [testbed-node-4] => (item={'id': '57ee51912cb6cd8406112b5504e719c4c82d61fa7d50a2abfc04c461faa03ba7', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 50 minutes (healthy)'})  2026-04-06 04:11:01.726761 | orchestrator | skipping: [testbed-node-4] => (item={'id': '38e6582cda34b878b04751d36f4766d3caad55429cb0fb1b40ff639a286ae842', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-04-06 04:11:01.726780 | orchestrator | skipping: [testbed-node-4] => (item={'id': '48c1338141860d5d2eddb10444730e68c2aafaa7522d9cdd0b1fa5ee20586251', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-04-06 04:11:01.726798 | orchestrator | skipping: [testbed-node-4] => (item={'id': '888e16e59dc55f556f7ac6109d9f0d42b0e52dc7331d697ccc9d76df46e36ced', 'image': 
'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-04-06 04:11:01.726817 | orchestrator | ok: [testbed-node-4] => (item={'id': '55b1bbdbaba90a2545bc939c5c0ea4519b85ea6579da8cae844fc1e40e608db1', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-06 04:11:01.726835 | orchestrator | ok: [testbed-node-4] => (item={'id': 'e13ae293545ffe2b7363a8c9945a2f91895c72e03b9b20390fb232e43c03f63e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-06 04:11:01.726852 | orchestrator | skipping: [testbed-node-4] => (item={'id': '332a420c48413b763d7224bee5d2b956b8fb676587679ed7379fabb44e18c39e', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-04-06 04:11:01.726869 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'de64f76862d2d3a8f2398186d41941f7a466fdb76cc479c5ac9996fa32505cfd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-06 04:11:01.726883 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bddd6fb815e9c90e9d8803c30ee1424793efe5626754d94ea004bc89b428ec18', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-06 04:11:01.726927 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9cdd82eb4b04ff378128364e80c6252e972fcf323e87024744d785b01476854e', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-06 04:11:01.726941 | orchestrator | 
skipping: [testbed-node-4] => (item={'id': 'fd667832b8eeb84ab625d26d3041811e378dcf95cb9ed495f0baa441858ca871', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-06 04:11:01.726953 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3f5a9f34506705ee94fca7c9ff1817afe16dacb22fc37f8d7bf3ec4ebfad6437', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-06 04:11:01.726966 | orchestrator | skipping: [testbed-node-5] => (item={'id': '081b4ddddc7aa602e033fd59a426846b14a360a7cb1f8c24627c316953a559e9', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-04-06 04:11:01.726984 | orchestrator | skipping: [testbed-node-5] => (item={'id': '23d9f2132f6c22d3696b0eda71cbff8ba4f29cc6b4fbe6ac24cde74ce3219b7b', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-04-06 04:11:01.726997 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6566953c84ffe4af588454515cf4bb563adf77b728abbb4fd03cc68fb92d6199', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-04-06 04:11:01.727009 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f86d3621c1cffd266705d43c257b76f21e7edd7b385e71cd7e639527f6bda0d5', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-04-06 04:11:01.727020 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'9764cb38c98634ea1f280669319737bcf4351974d9d6c1acd575fca988add097', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-04-06 04:11:01.727032 | orchestrator | skipping: [testbed-node-5] => (item={'id': '47d969f62c261519146df8d649be07698ffa30df3c568d430a6ca7dc2358ce43', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})  2026-04-06 04:11:01.727045 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'befb6a14316ab317e2df7d77e686a2881359aaa1232db3728e0daa6945bb6354', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})  2026-04-06 04:11:01.727057 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6f743ac39fad0ee04a3990e5474b0f67c2ab05ac013e576fd68a11e83cf4adce', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 50 minutes (healthy)'})  2026-04-06 04:11:01.727068 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9d0fa98b9cbd10af8c9331244247940319b1a41af62d9f47d5c5a5a9834b1b13', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-04-06 04:11:01.727080 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ab0327115132b7c0021f3828249e2e8fb30eea873cebda1d362f26e361b409a8', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-04-06 04:11:01.727093 | orchestrator | skipping: [testbed-node-5] => (item={'id': '93233d93be561db68bb542b600d57a1417519e9ac30faa0c37366e79ca8ba4b4', 'image': 
'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-04-06 04:11:01.727110 | orchestrator | ok: [testbed-node-5] => (item={'id': '95ac3bb0d01842ef64fbf8cdf3280bf584fb8dfc2e821aae53caf707430dd786', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-06 04:11:01.727130 | orchestrator | ok: [testbed-node-5] => (item={'id': '6af9d211579bc7da5fd5f7ac69f6ec8b5cac59680696233ceaa7c37307e03052', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-06 04:11:13.906830 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9f212a483fe4cc6a732fa5024d1867e2bd3b731e32eded2a36dc99b936228099', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-04-06 04:11:13.906961 | orchestrator | skipping: [testbed-node-5] => (item={'id': '414e28532b5e71901cd99ae53d01e11b9df272098dde2b449955b980dad7c2b0', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-06 04:11:13.906990 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7f4563fb3c2a2b43e4c94a9bdde3a945bad11657b6d11f1a55c992f46abba79d', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-06 04:11:13.907007 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cd2c55435eaf0e17613e6128f0303eced0065993b88e673863be1d2cce0929f9', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-06 04:11:13.907022 | orchestrator | 
skipping: [testbed-node-5] => (item={'id': 'ef9c4521800eb018fd57b7dd19d964920eea7abad6b1ce9a78916117b5814f0a', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-06 04:11:13.907040 | orchestrator | skipping: [testbed-node-5] => (item={'id': '43091e6b2d58b2ff4a9c05b0cadda2b5c7880056319f20a597d3e492730c2523', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-06 04:11:13.907057 | orchestrator | 2026-04-06 04:11:13.907075 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-04-06 04:11:13.907094 | orchestrator | Monday 06 April 2026 04:11:01 +0000 (0:00:00.607) 0:00:05.892 ********** 2026-04-06 04:11:13.907110 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:11:13.907128 | orchestrator | ok: [testbed-node-4] 2026-04-06 04:11:13.907144 | orchestrator | ok: [testbed-node-5] 2026-04-06 04:11:13.907159 | orchestrator | 2026-04-06 04:11:13.907176 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-04-06 04:11:13.907193 | orchestrator | Monday 06 April 2026 04:11:02 +0000 (0:00:00.345) 0:00:06.237 ********** 2026-04-06 04:11:13.907210 | orchestrator | skipping: [testbed-node-3] 2026-04-06 04:11:13.907222 | orchestrator | skipping: [testbed-node-4] 2026-04-06 04:11:13.907232 | orchestrator | skipping: [testbed-node-5] 2026-04-06 04:11:13.907242 | orchestrator | 2026-04-06 04:11:13.907253 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-04-06 04:11:13.907263 | orchestrator | Monday 06 April 2026 04:11:02 +0000 (0:00:00.538) 0:00:06.775 ********** 2026-04-06 04:11:13.907273 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:11:13.907283 | orchestrator | ok: [testbed-node-4] 2026-04-06 04:11:13.907293 | orchestrator | ok: 
[testbed-node-5] 2026-04-06 04:11:13.907302 | orchestrator | 2026-04-06 04:11:13.907312 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-06 04:11:13.907345 | orchestrator | Monday 06 April 2026 04:11:02 +0000 (0:00:00.370) 0:00:07.146 ********** 2026-04-06 04:11:13.907355 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:11:13.907366 | orchestrator | ok: [testbed-node-4] 2026-04-06 04:11:13.907377 | orchestrator | ok: [testbed-node-5] 2026-04-06 04:11:13.907388 | orchestrator | 2026-04-06 04:11:13.907400 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-04-06 04:11:13.907411 | orchestrator | Monday 06 April 2026 04:11:03 +0000 (0:00:00.332) 0:00:07.479 ********** 2026-04-06 04:11:13.907438 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-04-06 04:11:13.907451 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-04-06 04:11:13.907463 | orchestrator | skipping: [testbed-node-3] 2026-04-06 04:11:13.907474 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-04-06 04:11:13.907485 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-04-06 04:11:13.907497 | orchestrator | skipping: [testbed-node-4] 2026-04-06 04:11:13.907513 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-04-06 04:11:13.907529 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-04-06 04:11:13.907544 | orchestrator | skipping: [testbed-node-5] 2026-04-06 04:11:13.907561 | orchestrator | 2026-04-06 04:11:13.907580 | orchestrator | TASK [Get count of ceph-osd containers that are not running] 
******************* 2026-04-06 04:11:13.907597 | orchestrator | Monday 06 April 2026 04:11:03 +0000 (0:00:00.365) 0:00:07.844 ********** 2026-04-06 04:11:13.907880 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:11:13.907921 | orchestrator | ok: [testbed-node-4] 2026-04-06 04:11:13.907932 | orchestrator | ok: [testbed-node-5] 2026-04-06 04:11:13.907941 | orchestrator | 2026-04-06 04:11:13.907952 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-06 04:11:13.907962 | orchestrator | Monday 06 April 2026 04:11:04 +0000 (0:00:00.558) 0:00:08.403 ********** 2026-04-06 04:11:13.907971 | orchestrator | skipping: [testbed-node-3] 2026-04-06 04:11:13.908016 | orchestrator | skipping: [testbed-node-4] 2026-04-06 04:11:13.908028 | orchestrator | skipping: [testbed-node-5] 2026-04-06 04:11:13.908033 | orchestrator | 2026-04-06 04:11:13.908039 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-06 04:11:13.908044 | orchestrator | Monday 06 April 2026 04:11:04 +0000 (0:00:00.325) 0:00:08.729 ********** 2026-04-06 04:11:13.908049 | orchestrator | skipping: [testbed-node-3] 2026-04-06 04:11:13.908054 | orchestrator | skipping: [testbed-node-4] 2026-04-06 04:11:13.908059 | orchestrator | skipping: [testbed-node-5] 2026-04-06 04:11:13.908064 | orchestrator | 2026-04-06 04:11:13.908069 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-04-06 04:11:13.908074 | orchestrator | Monday 06 April 2026 04:11:04 +0000 (0:00:00.349) 0:00:09.078 ********** 2026-04-06 04:11:13.908079 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:11:13.908084 | orchestrator | ok: [testbed-node-4] 2026-04-06 04:11:13.908089 | orchestrator | ok: [testbed-node-5] 2026-04-06 04:11:13.908094 | orchestrator | 2026-04-06 04:11:13.908099 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-06 
04:11:13.908104 | orchestrator | Monday 06 April 2026 04:11:05 +0000 (0:00:00.343) 0:00:09.421 ********** 2026-04-06 04:11:13.908109 | orchestrator | skipping: [testbed-node-3] 2026-04-06 04:11:13.908114 | orchestrator | 2026-04-06 04:11:13.908119 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-06 04:11:13.908136 | orchestrator | Monday 06 April 2026 04:11:05 +0000 (0:00:00.759) 0:00:10.181 ********** 2026-04-06 04:11:13.908141 | orchestrator | skipping: [testbed-node-3] 2026-04-06 04:11:13.908146 | orchestrator | 2026-04-06 04:11:13.908151 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-06 04:11:13.908167 | orchestrator | Monday 06 April 2026 04:11:06 +0000 (0:00:00.266) 0:00:10.448 ********** 2026-04-06 04:11:13.908174 | orchestrator | skipping: [testbed-node-3] 2026-04-06 04:11:13.908182 | orchestrator | 2026-04-06 04:11:13.908190 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 04:11:13.908198 | orchestrator | Monday 06 April 2026 04:11:06 +0000 (0:00:00.284) 0:00:10.732 ********** 2026-04-06 04:11:13.908206 | orchestrator | 2026-04-06 04:11:13.908215 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 04:11:13.908223 | orchestrator | Monday 06 April 2026 04:11:06 +0000 (0:00:00.074) 0:00:10.807 ********** 2026-04-06 04:11:13.908230 | orchestrator | 2026-04-06 04:11:13.908239 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 04:11:13.908246 | orchestrator | Monday 06 April 2026 04:11:06 +0000 (0:00:00.093) 0:00:10.900 ********** 2026-04-06 04:11:13.908251 | orchestrator | 2026-04-06 04:11:13.908256 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-06 04:11:13.908261 | orchestrator | Monday 06 April 2026 04:11:06 +0000 
(0:00:00.085) 0:00:10.986 ********** 2026-04-06 04:11:13.908266 | orchestrator | skipping: [testbed-node-3] 2026-04-06 04:11:13.908270 | orchestrator | 2026-04-06 04:11:13.908275 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-04-06 04:11:13.908280 | orchestrator | Monday 06 April 2026 04:11:07 +0000 (0:00:00.272) 0:00:11.259 ********** 2026-04-06 04:11:13.908285 | orchestrator | skipping: [testbed-node-3] 2026-04-06 04:11:13.908290 | orchestrator | 2026-04-06 04:11:13.908295 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-06 04:11:13.908300 | orchestrator | Monday 06 April 2026 04:11:07 +0000 (0:00:00.308) 0:00:11.567 ********** 2026-04-06 04:11:13.908304 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:11:13.908309 | orchestrator | ok: [testbed-node-4] 2026-04-06 04:11:13.908314 | orchestrator | ok: [testbed-node-5] 2026-04-06 04:11:13.908319 | orchestrator | 2026-04-06 04:11:13.908324 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-04-06 04:11:13.908329 | orchestrator | Monday 06 April 2026 04:11:07 +0000 (0:00:00.340) 0:00:11.908 ********** 2026-04-06 04:11:13.908337 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:11:13.908344 | orchestrator | 2026-04-06 04:11:13.908353 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-04-06 04:11:13.908362 | orchestrator | Monday 06 April 2026 04:11:08 +0000 (0:00:00.782) 0:00:12.690 ********** 2026-04-06 04:11:13.908370 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-06 04:11:13.908378 | orchestrator | 2026-04-06 04:11:13.908383 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-04-06 04:11:13.908388 | orchestrator | Monday 06 April 2026 04:11:10 +0000 (0:00:01.643) 0:00:14.334 ********** 2026-04-06 04:11:13.908393 
| orchestrator | ok: [testbed-node-3] 2026-04-06 04:11:13.908398 | orchestrator | 2026-04-06 04:11:13.908403 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-04-06 04:11:13.908407 | orchestrator | Monday 06 April 2026 04:11:10 +0000 (0:00:00.141) 0:00:14.476 ********** 2026-04-06 04:11:13.908412 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:11:13.908417 | orchestrator | 2026-04-06 04:11:13.908422 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-04-06 04:11:13.908427 | orchestrator | Monday 06 April 2026 04:11:10 +0000 (0:00:00.387) 0:00:14.864 ********** 2026-04-06 04:11:13.908432 | orchestrator | skipping: [testbed-node-3] 2026-04-06 04:11:13.908437 | orchestrator | 2026-04-06 04:11:13.908442 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-04-06 04:11:13.908447 | orchestrator | Monday 06 April 2026 04:11:10 +0000 (0:00:00.128) 0:00:14.992 ********** 2026-04-06 04:11:13.908452 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:11:13.908457 | orchestrator | 2026-04-06 04:11:13.908462 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-06 04:11:13.908467 | orchestrator | Monday 06 April 2026 04:11:10 +0000 (0:00:00.136) 0:00:15.128 ********** 2026-04-06 04:11:13.908477 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:11:13.908483 | orchestrator | ok: [testbed-node-4] 2026-04-06 04:11:13.908491 | orchestrator | ok: [testbed-node-5] 2026-04-06 04:11:13.908500 | orchestrator | 2026-04-06 04:11:13.908508 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-04-06 04:11:13.908516 | orchestrator | Monday 06 April 2026 04:11:11 +0000 (0:00:00.329) 0:00:15.458 ********** 2026-04-06 04:11:13.908524 | orchestrator | changed: [testbed-node-3] 2026-04-06 04:11:13.908532 | orchestrator | changed: 
[testbed-node-4] 2026-04-06 04:11:13.908542 | orchestrator | changed: [testbed-node-5] 2026-04-06 04:11:25.487775 | orchestrator | 2026-04-06 04:11:25.487896 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-04-06 04:11:25.487914 | orchestrator | Monday 06 April 2026 04:11:13 +0000 (0:00:02.617) 0:00:18.075 ********** 2026-04-06 04:11:25.487927 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:11:25.487939 | orchestrator | ok: [testbed-node-4] 2026-04-06 04:11:25.487951 | orchestrator | ok: [testbed-node-5] 2026-04-06 04:11:25.487962 | orchestrator | 2026-04-06 04:11:25.487974 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-04-06 04:11:25.487985 | orchestrator | Monday 06 April 2026 04:11:14 +0000 (0:00:00.405) 0:00:18.480 ********** 2026-04-06 04:11:25.487996 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:11:25.488007 | orchestrator | ok: [testbed-node-4] 2026-04-06 04:11:25.488018 | orchestrator | ok: [testbed-node-5] 2026-04-06 04:11:25.488029 | orchestrator | 2026-04-06 04:11:25.488040 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-04-06 04:11:25.488052 | orchestrator | Monday 06 April 2026 04:11:14 +0000 (0:00:00.620) 0:00:19.101 ********** 2026-04-06 04:11:25.488063 | orchestrator | skipping: [testbed-node-3] 2026-04-06 04:11:25.488075 | orchestrator | skipping: [testbed-node-4] 2026-04-06 04:11:25.488086 | orchestrator | skipping: [testbed-node-5] 2026-04-06 04:11:25.488097 | orchestrator | 2026-04-06 04:11:25.488108 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-04-06 04:11:25.488137 | orchestrator | Monday 06 April 2026 04:11:15 +0000 (0:00:00.358) 0:00:19.459 ********** 2026-04-06 04:11:25.488148 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:11:25.488159 | orchestrator | ok: [testbed-node-4] 2026-04-06 04:11:25.488170 | 
orchestrator | ok: [testbed-node-5] 2026-04-06 04:11:25.488181 | orchestrator | 2026-04-06 04:11:25.488192 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-04-06 04:11:25.488203 | orchestrator | Monday 06 April 2026 04:11:15 +0000 (0:00:00.645) 0:00:20.105 ********** 2026-04-06 04:11:25.488214 | orchestrator | skipping: [testbed-node-3] 2026-04-06 04:11:25.488225 | orchestrator | skipping: [testbed-node-4] 2026-04-06 04:11:25.488236 | orchestrator | skipping: [testbed-node-5] 2026-04-06 04:11:25.488247 | orchestrator | 2026-04-06 04:11:25.488258 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-04-06 04:11:25.488270 | orchestrator | Monday 06 April 2026 04:11:16 +0000 (0:00:00.349) 0:00:20.455 ********** 2026-04-06 04:11:25.488281 | orchestrator | skipping: [testbed-node-3] 2026-04-06 04:11:25.488292 | orchestrator | skipping: [testbed-node-4] 2026-04-06 04:11:25.488303 | orchestrator | skipping: [testbed-node-5] 2026-04-06 04:11:25.488314 | orchestrator | 2026-04-06 04:11:25.488325 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-06 04:11:25.488336 | orchestrator | Monday 06 April 2026 04:11:16 +0000 (0:00:00.348) 0:00:20.804 ********** 2026-04-06 04:11:25.488347 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:11:25.488358 | orchestrator | ok: [testbed-node-4] 2026-04-06 04:11:25.488369 | orchestrator | ok: [testbed-node-5] 2026-04-06 04:11:25.488380 | orchestrator | 2026-04-06 04:11:25.488392 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-04-06 04:11:25.488403 | orchestrator | Monday 06 April 2026 04:11:17 +0000 (0:00:00.584) 0:00:21.388 ********** 2026-04-06 04:11:25.488414 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:11:25.488447 | orchestrator | ok: [testbed-node-4] 2026-04-06 04:11:25.488459 | orchestrator | ok: [testbed-node-5] 
2026-04-06 04:11:25.488469 | orchestrator |
2026-04-06 04:11:25.488481 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-04-06 04:11:25.488491 | orchestrator | Monday 06 April 2026  04:11:18 +0000 (0:00:00.847)       0:00:22.235 **********
2026-04-06 04:11:25.488502 | orchestrator | ok: [testbed-node-3]
2026-04-06 04:11:25.488513 | orchestrator | ok: [testbed-node-4]
2026-04-06 04:11:25.488524 | orchestrator | ok: [testbed-node-5]
2026-04-06 04:11:25.488534 | orchestrator |
2026-04-06 04:11:25.488545 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-04-06 04:11:25.488556 | orchestrator | Monday 06 April 2026  04:11:18 +0000 (0:00:00.353)       0:00:22.589 **********
2026-04-06 04:11:25.488567 | orchestrator | skipping: [testbed-node-3]
2026-04-06 04:11:25.488578 | orchestrator | skipping: [testbed-node-4]
2026-04-06 04:11:25.488588 | orchestrator | skipping: [testbed-node-5]
2026-04-06 04:11:25.488599 | orchestrator |
2026-04-06 04:11:25.488643 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-04-06 04:11:25.488656 | orchestrator | Monday 06 April 2026  04:11:18 +0000 (0:00:00.336)       0:00:22.925 **********
2026-04-06 04:11:25.488667 | orchestrator | ok: [testbed-node-3]
2026-04-06 04:11:25.488678 | orchestrator | ok: [testbed-node-4]
2026-04-06 04:11:25.488700 | orchestrator | ok: [testbed-node-5]
2026-04-06 04:11:25.488711 | orchestrator |
2026-04-06 04:11:25.488722 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-06 04:11:25.488733 | orchestrator | Monday 06 April 2026  04:11:19 +0000 (0:00:00.581)       0:00:23.507 **********
2026-04-06 04:11:25.488743 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-06 04:11:25.488755 | orchestrator |
2026-04-06 04:11:25.488766 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-06 04:11:25.488776 | orchestrator | Monday 06 April 2026  04:11:19 +0000 (0:00:00.298)       0:00:23.805 **********
2026-04-06 04:11:25.488787 | orchestrator | skipping: [testbed-node-3]
2026-04-06 04:11:25.488798 | orchestrator |
2026-04-06 04:11:25.488809 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-06 04:11:25.488820 | orchestrator | Monday 06 April 2026  04:11:19 +0000 (0:00:00.278)       0:00:24.083 **********
2026-04-06 04:11:25.488830 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-06 04:11:25.488841 | orchestrator |
2026-04-06 04:11:25.488852 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-06 04:11:25.488862 | orchestrator | Monday 06 April 2026  04:11:21 +0000 (0:00:01.865)       0:00:25.949 **********
2026-04-06 04:11:25.488873 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-06 04:11:25.488884 | orchestrator |
2026-04-06 04:11:25.488895 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-06 04:11:25.488906 | orchestrator | Monday 06 April 2026  04:11:22 +0000 (0:00:00.325)       0:00:26.274 **********
2026-04-06 04:11:25.488917 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-06 04:11:25.488928 | orchestrator |
2026-04-06 04:11:25.488959 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-06 04:11:25.488971 | orchestrator | Monday 06 April 2026  04:11:22 +0000 (0:00:00.103)       0:00:26.569 **********
2026-04-06 04:11:25.488981 | orchestrator |
2026-04-06 04:11:25.488992 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-06 04:11:25.489003 | orchestrator | Monday 06 April 2026  04:11:22 +0000 (0:00:00.076)       0:00:26.672 **********
2026-04-06 04:11:25.489014 | orchestrator |
2026-04-06 04:11:25.489025 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-06 04:11:25.489035 | orchestrator | Monday 06 April 2026  04:11:22 +0000 (0:00:00.077)       0:00:26.749 **********
2026-04-06 04:11:25.489046 | orchestrator |
2026-04-06 04:11:25.489057 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-06 04:11:25.489068 | orchestrator | Monday 06 April 2026  04:11:22 +0000 (0:00:00.077)       0:00:26.827 **********
2026-04-06 04:11:25.489088 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-06 04:11:25.489099 | orchestrator |
2026-04-06 04:11:25.489110 | orchestrator | TASK [Print report file information] *******************************************
2026-04-06 04:11:25.489121 | orchestrator | Monday 06 April 2026  04:11:24 +0000 (0:00:01.764)       0:00:28.591 **********
2026-04-06 04:11:25.489137 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-04-06 04:11:25.489149 | orchestrator |     "msg": [
2026-04-06 04:11:25.489160 | orchestrator |         "Validator run completed.",
2026-04-06 04:11:25.489171 | orchestrator |         "You can find the report file here:",
2026-04-06 04:11:25.489183 | orchestrator |         "/opt/reports/validator/ceph-osds-validator-2026-04-06T04:10:57+00:00-report.json",
2026-04-06 04:11:25.489195 | orchestrator |         "on the following host:",
2026-04-06 04:11:25.489206 | orchestrator |         "testbed-manager"
2026-04-06 04:11:25.489217 | orchestrator |     ]
2026-04-06 04:11:25.489228 | orchestrator | }
2026-04-06 04:11:25.489240 | orchestrator |
2026-04-06 04:11:25.489251 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 04:11:25.489263 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-06 04:11:25.489276 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-06 04:11:25.489287 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-06 04:11:25.489298 | orchestrator |
2026-04-06 04:11:25.489309 | orchestrator |
2026-04-06 04:11:25.489320 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 04:11:25.489331 | orchestrator | Monday 06 April 2026  04:11:25 +0000 (0:00:00.679)       0:00:29.270 **********
2026-04-06 04:11:25.489342 | orchestrator | ===============================================================================
2026-04-06 04:11:25.489352 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.62s
2026-04-06 04:11:25.489363 | orchestrator | Aggregate test results step one ----------------------------------------- 1.87s
2026-04-06 04:11:25.489374 | orchestrator | Write report file ------------------------------------------------------- 1.76s
2026-04-06 04:11:25.489385 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.64s
2026-04-06 04:11:25.489403 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.92s
2026-04-06 04:11:25.489434 | orchestrator | Get timestamp for report file ------------------------------------------- 0.92s
2026-04-06 04:11:25.489453 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.85s
2026-04-06 04:11:25.489473 | orchestrator | Create report output directory ------------------------------------------ 0.82s
2026-04-06 04:11:25.489490 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.78s
2026-04-06 04:11:25.489512 | orchestrator | Aggregate test results step one ----------------------------------------- 0.76s
2026-04-06 04:11:25.489537 | orchestrator | Print report file information ------------------------------------------- 0.68s
2026-04-06 04:11:25.489555 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.65s
2026-04-06 04:11:25.489571 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.62s
2026-04-06 04:11:25.489589 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.62s
2026-04-06 04:11:25.489681 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.61s
2026-04-06 04:11:25.489704 | orchestrator | Prepare test data ------------------------------------------------------- 0.58s
2026-04-06 04:11:25.489724 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.58s
2026-04-06 04:11:25.489742 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.56s
2026-04-06 04:11:25.489776 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.54s
2026-04-06 04:11:25.489796 | orchestrator | Parse LVM data as JSON -------------------------------------------------- 0.41s
2026-04-06 04:11:25.880067 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2026-04-06 04:11:25.888552 | orchestrator | + set -e
2026-04-06 04:11:25.888650 | orchestrator | + source /opt/manager-vars.sh
2026-04-06 04:11:25.889408 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-06 04:11:25.889422 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-06 04:11:25.889431 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-06 04:11:25.889489 | orchestrator | ++ CEPH_VERSION=reef
2026-04-06 04:11:25.889498 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-06 04:11:25.889578 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-06 04:11:25.889585 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-06 04:11:25.889686 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-06 04:11:25.889692 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-06 04:11:25.889697 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-06 04:11:25.889702 | orchestrator | ++ export ARA=false
2026-04-06 04:11:25.889707 | orchestrator | ++ ARA=false
2026-04-06 04:11:25.889712 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-06 04:11:25.889717 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-06 04:11:25.889721 | orchestrator | ++ export TEMPEST=false
2026-04-06 04:11:25.889726 | orchestrator | ++ TEMPEST=false
2026-04-06 04:11:25.889731 | orchestrator | ++ export IS_ZUUL=true
2026-04-06 04:11:25.889736 | orchestrator | ++ IS_ZUUL=true
2026-04-06 04:11:25.889741 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-04-06 04:11:25.889746 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-04-06 04:11:25.889751 | orchestrator | ++ export EXTERNAL_API=false
2026-04-06 04:11:25.889756 | orchestrator | ++ EXTERNAL_API=false
2026-04-06 04:11:25.889760 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-06 04:11:25.889765 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-06 04:11:25.889774 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-06 04:11:25.889779 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-06 04:11:25.889784 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-06 04:11:25.889851 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-06 04:11:25.889858 | orchestrator | + source /etc/os-release
2026-04-06 04:11:25.889862 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS'
2026-04-06 04:11:25.889867 | orchestrator | ++ NAME=Ubuntu
2026-04-06 04:11:25.889872 | orchestrator | ++ VERSION_ID=24.04
2026-04-06 04:11:25.889876 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)'
2026-04-06 04:11:25.889881 | orchestrator | ++ VERSION_CODENAME=noble
2026-04-06 04:11:25.889885 | orchestrator | ++ ID=ubuntu
2026-04-06 04:11:25.889890 | orchestrator | ++ ID_LIKE=debian
2026-04-06 04:11:25.889895 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2026-04-06 04:11:25.889899 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2026-04-06 04:11:25.889904 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2026-04-06 04:11:25.889909 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2026-04-06 04:11:25.889914 | orchestrator | ++ UBUNTU_CODENAME=noble
2026-04-06 04:11:25.889919 | orchestrator | ++ LOGO=ubuntu-logo
2026-04-06 04:11:25.889926 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2026-04-06 04:11:25.889932 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2026-04-06 04:11:25.889937 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-04-06 04:11:25.921693 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-04-06 04:11:50.745799 | orchestrator |
2026-04-06 04:11:50.745877 | orchestrator | # Status of Elasticsearch
2026-04-06 04:11:50.745885 | orchestrator |
2026-04-06 04:11:50.745892 | orchestrator | + pushd /opt/configuration/contrib
2026-04-06 04:11:50.745898 | orchestrator | + echo
2026-04-06 04:11:50.745903 | orchestrator | + echo '# Status of Elasticsearch'
2026-04-06 04:11:50.745908 | orchestrator | + echo
2026-04-06 04:11:50.745914 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2026-04-06 04:11:50.944480 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2026-04-06 04:11:50.944588 | orchestrator |
2026-04-06 04:11:50.944639 | orchestrator | # Status of MariaDB
2026-04-06 04:11:50.944654 | orchestrator |
2026-04-06 04:11:50.944665 | orchestrator | + echo
2026-04-06 04:11:50.944675 | orchestrator | + echo '# Status of MariaDB'
2026-04-06 04:11:50.944685 | orchestrator | + echo
2026-04-06 04:11:50.944992 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-04-06 04:11:50.991142 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-06 04:11:50.991249 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-04-06 04:11:50.991262 | orchestrator | + MARIADB_USER=root_shard_0
2026-04-06 04:11:50.991273 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2026-04-06 04:11:51.052886 | orchestrator | Reading package lists...
2026-04-06 04:11:51.449255 | orchestrator | Building dependency tree...
2026-04-06 04:11:51.451019 | orchestrator | Reading state information...
2026-04-06 04:11:51.952007 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
2026-04-06 04:11:51.952112 | orchestrator | bc set to manually installed.
2026-04-06 04:11:51.952130 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
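The `check_elasticsearch` line above follows the Nagios plugin convention: a human-readable status, then performance data after a `|` as `'label'=value` pairs (optionally followed by `;`-separated thresholds). A minimal sketch of extracting those counters; this is a generic parser written for illustration, not part of the testbed scripts:

```python
def parse_perfdata(plugin_output: str) -> dict:
    """Split a Nagios plugin line at the last '|' and parse 'label'=value pairs."""
    if "|" not in plugin_output:
        return {}
    _, perfdata = plugin_output.rsplit("|", 1)
    metrics = {}
    for token in perfdata.split():
        label, _, value = token.partition("=")
        # Drop ';'-separated warn/crit thresholds and a trailing unit (e.g. 's')
        metrics[label.strip("'")] = float(value.split(";")[0].rstrip("s%BKMGTc"))
    return metrics

line = ("OK - elasticsearch (kolla_logging) is running. status: green | "
        "'active_primary'=9 'active'=22 'relocating'=0 'init'=0")
print(parse_perfdata(line))
```

The same function handles the `check_tcp` output later in this log (`...port 6379|time=0.002111s;;;0.000000;10.000000`), since the unit suffix and thresholds are stripped before conversion.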
2026-04-06 04:11:52.696279 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2026-04-06 04:11:52.696539 | orchestrator |
2026-04-06 04:11:52.696565 | orchestrator | # Status of Prometheus
2026-04-06 04:11:52.696578 | orchestrator |
2026-04-06 04:11:52.696589 | orchestrator | + echo
2026-04-06 04:11:52.696600 | orchestrator | + echo '# Status of Prometheus'
2026-04-06 04:11:52.696729 | orchestrator | + echo
2026-04-06 04:11:52.696752 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2026-04-06 04:11:52.775384 | orchestrator | Unauthorized
2026-04-06 04:11:52.782674 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2026-04-06 04:11:52.854590 | orchestrator | Unauthorized
2026-04-06 04:11:52.858878 | orchestrator |
2026-04-06 04:11:52.858934 | orchestrator | # Status of RabbitMQ
2026-04-06 04:11:52.858940 | orchestrator |
2026-04-06 04:11:52.858946 | orchestrator | + echo
2026-04-06 04:11:52.858951 | orchestrator | + echo '# Status of RabbitMQ'
2026-04-06 04:11:52.858956 | orchestrator | + echo
2026-04-06 04:11:52.859344 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-04-06 04:11:52.921720 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-06 04:11:52.921788 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-04-06 04:11:52.921795 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2026-04-06 04:11:53.462076 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2026-04-06 04:11:53.472850 | orchestrator |
2026-04-06 04:11:53.472937 | orchestrator | # Status of Redis
2026-04-06 04:11:53.472949 | orchestrator |
2026-04-06 04:11:53.472959 | orchestrator | + echo
2026-04-06 04:11:53.472969 | orchestrator | + echo '# Status of Redis'
2026-04-06 04:11:53.472979 | orchestrator | + echo
2026-04-06 04:11:53.472990 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2026-04-06 04:11:53.479338 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002111s;;;0.000000;10.000000
2026-04-06 04:11:53.479418 | orchestrator | + popd
2026-04-06 04:11:53.479428 | orchestrator |
2026-04-06 04:11:53.479438 | orchestrator | + echo
2026-04-06 04:11:53.479447 | orchestrator | # Create backup of MariaDB database
2026-04-06 04:11:53.479457 | orchestrator |
2026-04-06 04:11:53.479467 | orchestrator | + echo '# Create backup of MariaDB database'
2026-04-06 04:11:53.479477 | orchestrator | + echo
2026-04-06 04:11:53.479486 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2026-04-06 04:11:56.122819 | orchestrator | 2026-04-06 04:11:56 | INFO  | Task b983c441-26be-4536-af56-9980feb9b7ee (mariadb_backup) was prepared for execution.
2026-04-06 04:11:56.122896 | orchestrator | 2026-04-06 04:11:56 | INFO  | It takes a moment until task b983c441-26be-4536-af56-9980feb9b7ee (mariadb_backup) has been started and output is visible here.
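The `semver 9.5.0 10.0.0-0` steps traced above compare the manager version against a cutoff; the `-1` result (manager older than 10.0.0) selects the pre-10 code path, e.g. `MARIADB_USER=root_shard_0`. The `semver` helper itself is not shown in this log; a minimal sketch of an equivalent dotted-version comparison, with a hypothetical `compare_versions` name and the simplifying assumption that a pre-release suffix such as `-0` can be ignored:

```python
def compare_versions(a: str, b: str) -> int:
    """Return -1, 0, or 1 depending on whether version a is older than,
    equal to, or newer than version b. Pre-release suffixes after '-'
    are dropped before the numeric comparison."""
    def key(version: str) -> list[int]:
        return [int(part) for part in version.split("-")[0].split(".")]
    ka, kb = key(a), key(b)
    return (ka > kb) - (ka < kb)

# 9.5.0 < 10.0.0, so the script takes the [[ -1 -ge 0 ]] == false branch
print(compare_versions("9.5.0", "10.0.0-0"))  # -1
```

Comparing component lists (`[9, 5, 0] < [10, 0, 0]`) rather than raw strings is what makes `9.5.0` sort before `10.0.0`; a plain string comparison would get this wrong.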
2026-04-06 04:12:28.133585 | orchestrator |
2026-04-06 04:12:28.133719 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 04:12:28.133730 | orchestrator |
2026-04-06 04:12:28.133750 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 04:12:28.133757 | orchestrator | Monday 06 April 2026  04:12:00 +0000 (0:00:00.201)       0:00:00.201 **********
2026-04-06 04:12:28.133762 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:12:28.133812 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:12:28.133818 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:12:28.133823 | orchestrator |
2026-04-06 04:12:28.133828 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 04:12:28.133834 | orchestrator | Monday 06 April 2026  04:12:01 +0000 (0:00:00.351)       0:00:00.552 **********
2026-04-06 04:12:28.133839 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-06 04:12:28.133845 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-04-06 04:12:28.133850 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-04-06 04:12:28.133855 | orchestrator |
2026-04-06 04:12:28.133860 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-04-06 04:12:28.133865 | orchestrator |
2026-04-06 04:12:28.133870 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-04-06 04:12:28.133875 | orchestrator | Monday 06 April 2026  04:12:01 +0000 (0:00:00.455)       0:00:01.194 **********
2026-04-06 04:12:28.133880 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 04:12:28.133885 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-06 04:12:28.133890 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-06 04:12:28.133897 | orchestrator |
2026-04-06 04:12:28.133905 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-06 04:12:28.133916 | orchestrator | Monday 06 April 2026  04:12:02 +0000 (0:00:00.455)       0:00:01.650 **********
2026-04-06 04:12:28.133925 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 04:12:28.133933 | orchestrator |
2026-04-06 04:12:28.133941 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-04-06 04:12:28.133949 | orchestrator | Monday 06 April 2026  04:12:03 +0000 (0:00:00.600)       0:00:02.250 **********
2026-04-06 04:12:28.133957 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:12:28.133965 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:12:28.133972 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:12:28.133979 | orchestrator |
2026-04-06 04:12:28.133987 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-04-06 04:12:28.133994 | orchestrator | Monday 06 April 2026  04:12:06 +0000 (0:00:03.660)       0:00:05.910 **********
2026-04-06 04:12:28.134002 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-04-06 04:12:28.134010 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-04-06 04:12:28.134048 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-06 04:12:28.134057 | orchestrator | mariadb_bootstrap_restart
2026-04-06 04:12:28.134065 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:12:28.134074 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:12:28.134082 | orchestrator | changed: [testbed-node-0]
2026-04-06 04:12:28.134091 | orchestrator |
2026-04-06 04:12:28.134099 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-06 04:12:28.134107 | orchestrator | skipping: no hosts matched
2026-04-06 04:12:28.134116 | orchestrator |
2026-04-06 04:12:28.134125 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-06 04:12:28.134134 | orchestrator | skipping: no hosts matched
2026-04-06 04:12:28.134143 | orchestrator |
2026-04-06 04:12:28.134152 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-04-06 04:12:28.134161 | orchestrator | skipping: no hosts matched
2026-04-06 04:12:28.134170 | orchestrator |
2026-04-06 04:12:28.134179 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-04-06 04:12:28.134186 | orchestrator |
2026-04-06 04:12:28.134192 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-04-06 04:12:28.134197 | orchestrator | Monday 06 April 2026  04:12:26 +0000 (0:00:20.249)       0:00:26.160 **********
2026-04-06 04:12:28.134203 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:12:28.134209 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:12:28.134221 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:12:28.134227 | orchestrator |
2026-04-06 04:12:28.134233 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-04-06 04:12:28.134239 | orchestrator | Monday 06 April 2026  04:12:27 +0000 (0:00:00.349)       0:00:26.510 **********
2026-04-06 04:12:28.134244 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:12:28.134250 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:12:28.134255 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:12:28.134261 | orchestrator |
2026-04-06 04:12:28.134267 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 04:12:28.134274 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 04:12:28.134281 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-06 04:12:28.134287 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-06 04:12:28.134293 | orchestrator |
2026-04-06 04:12:28.134298 | orchestrator |
2026-04-06 04:12:28.134304 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 04:12:28.134309 | orchestrator | Monday 06 April 2026  04:12:27 +0000 (0:00:00.454)       0:00:26.964 **********
2026-04-06 04:12:28.134315 | orchestrator | ===============================================================================
2026-04-06 04:12:28.134321 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 20.25s
2026-04-06 04:12:28.134340 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.66s
2026-04-06 04:12:28.134346 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.64s
2026-04-06 04:12:28.134352 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.60s
2026-04-06 04:12:28.134357 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.46s
2026-04-06 04:12:28.134363 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.45s
2026-04-06 04:12:28.134369 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s
2026-04-06 04:12:28.134375 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.35s
2026-04-06 04:12:28.499604 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-04-06 04:12:28.509824 | orchestrator | + set -e
2026-04-06 04:12:28.509919 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-06 04:12:28.510775 | orchestrator | ++ export INTERACTIVE=false
2026-04-06 04:12:28.510806 | orchestrator | ++ INTERACTIVE=false
2026-04-06 04:12:28.510818 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-06 04:12:28.510830 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-06 04:12:28.510842 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-06 04:12:28.512653 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-06 04:12:28.519001 | orchestrator |
2026-04-06 04:12:28.519064 | orchestrator | # OpenStack endpoints
2026-04-06 04:12:28.519071 | orchestrator |
2026-04-06 04:12:28.519078 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-06 04:12:28.519084 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-06 04:12:28.519090 | orchestrator | + export OS_CLOUD=admin
2026-04-06 04:12:28.519096 | orchestrator | + OS_CLOUD=admin
2026-04-06 04:12:28.519102 | orchestrator | + echo
2026-04-06 04:12:28.519108 | orchestrator | + echo '# OpenStack endpoints'
2026-04-06 04:12:28.519113 | orchestrator | + echo
2026-04-06 04:12:28.519119 | orchestrator | + openstack endpoint list
2026-04-06 04:12:32.047151 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-06 04:12:32.047237 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-04-06 04:12:32.047248 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-06 04:12:32.047274 | orchestrator | | 00c7043dd98b41e4957e8da8ac6a5bd9 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-06 04:12:32.047290 | orchestrator | | 0210447ba9f74cdb84201782bc8dbf0f | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-04-06 04:12:32.047306 | orchestrator | | 0de5076405154708bb4ef31cf2ae5b49 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-04-06 04:12:32.047317 | orchestrator | | 10b83992a88a4f86b57d29142b13d3ac | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 |
2026-04-06 04:12:32.047329 | orchestrator | | 183527a132bf4c488295cd7a6199d8dd | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-04-06 04:12:32.047340 | orchestrator | | 1a58ad23aca54c2ba6a32a6a1def4e95 | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 |
2026-04-06 04:12:32.047353 | orchestrator | | 1b7381892f864f4d9f195bdf65110653 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-04-06 04:12:32.047365 | orchestrator | | 23b8ead015644e2e80b802cfe5c7c497 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 |
2026-04-06 04:12:32.047376 | orchestrator | | 255f85dd8c79400c89e23619d5b60091 | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 |
2026-04-06 04:12:32.047387 | orchestrator | | 26ea840d019e489d81498deb30f9b5fd | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 |
2026-04-06 04:12:32.047397 | orchestrator | | 315f6e63b173498c9087564076b669cb | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-04-06 04:12:32.047409 | orchestrator | | 39e2a961f3f442c893dab6e2dca4c84b | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-04-06 04:12:32.047420 | orchestrator | | 3e4b441f6f844d3f930629e0a6e1c43a | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-04-06 04:12:32.047431 | orchestrator | | 5515f9518124464b8802c7d5ea7d534a | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-06 04:12:32.047443 | orchestrator | | 7118b6da031e482f877364eb1ea560c9 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-04-06 04:12:32.047456 | orchestrator | | 7b358489a0fd41de94297d1a6de28272 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-04-06 04:12:32.047468 | orchestrator | | 7dedf4f3df4140cc893672197944356b | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-04-06 04:12:32.047481 | orchestrator | | 94e6b5c000d04f01b85d59361ca95575 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-06 04:12:32.047492 | orchestrator | | 97d284ddca284777a8a6592ef76120cd | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-04-06 04:12:32.047505 | orchestrator | | 98ed7244d9964eda9c5676ac3e14a1d5 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-06 04:12:32.047540 | orchestrator | | a66acc85e2e94652b0c775779e867d04 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-04-06 04:12:32.047555 | orchestrator | | b74613dd69504680920e53e2fecac473 | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-04-06 04:12:32.047563 | orchestrator | | b9c8f5aa4bca4270950ead29b62b2462 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-04-06 04:12:32.047570 | orchestrator | | d0367dc4e42846f18ade8ec02f82357a | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-04-06 04:12:32.047577 | orchestrator | | d69ad87518b841ae9ab2601c819718cf | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-04-06 04:12:32.047585 | orchestrator | | d88e3e4cd5a14882a0092840325cdaa5 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-04-06 04:12:32.047592 | orchestrator | | ebd3e823599f4a55b73452cfb1a1185f | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 |
2026-04-06 04:12:32.047599 | orchestrator | | ec958dc3331c4d5aa550324ee4dee13c | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-04-06 04:12:32.047606 | orchestrator | | ecbd6d624c6a457ab8e42c5da2ea2e9d | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-04-06 04:12:32.047635 | orchestrator | | f1578e57c413477e9b961874dcb46f94 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-04-06 04:12:32.047643 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-06 04:12:32.353961 | orchestrator |
2026-04-06 04:12:32.354119 | orchestrator | # Cinder
2026-04-06 04:12:32.354136 | orchestrator |
2026-04-06 04:12:32.354148 | orchestrator | + echo
2026-04-06 04:12:32.354160 | orchestrator | + echo '# Cinder'
2026-04-06 04:12:32.354171 | orchestrator | + echo
2026-04-06 04:12:32.354182 | orchestrator | + openstack volume service list
2026-04-06 04:12:35.186718 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-06 04:12:35.186804 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-04-06 04:12:35.186814 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-06 04:12:35.186828 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-06T04:12:31.000000 |
2026-04-06 04:12:35.186835 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-06T04:12:31.000000 |
2026-04-06 04:12:35.186841 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-06T04:12:31.000000 |
2026-04-06 04:12:35.186849 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-06T04:12:30.000000 |
2026-04-06 04:12:35.186860 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-06T04:12:28.000000 |
2026-04-06 04:12:35.186871 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-06T04:12:29.000000 |
2026-04-06 04:12:35.186881 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-06T04:12:31.000000 |
2026-04-06 04:12:35.186890 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-06T04:12:33.000000 |
2026-04-06 04:12:35.186926 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-06T04:12:33.000000 |
2026-04-06 04:12:35.186936 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-06 04:12:35.542902 | orchestrator |
2026-04-06 04:12:35.542999 | orchestrator | # Neutron
2026-04-06 04:12:35.543011 | orchestrator |
2026-04-06 04:12:35.543019 | orchestrator | + echo
2026-04-06 04:12:35.543027 | orchestrator | + echo '# Neutron'
2026-04-06 04:12:35.543036 | orchestrator | + echo
2026-04-06 04:12:35.543043 | orchestrator | + openstack network agent list
2026-04-06 04:12:38.355715 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-06 04:12:38.355823 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-04-06 04:12:38.355835 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-06 04:12:38.355843 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-04-06 04:12:38.355851 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-04-06 04:12:38.355876 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-04-06 04:12:38.355894 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-04-06 04:12:38.356675 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-04-06 04:12:38.356698 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-04-06 04:12:38.356708 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-06 04:12:38.356717 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-06 04:12:38.356726 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-06 04:12:38.356735 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-06 04:12:38.711604 | orchestrator | + openstack network service provider list
2026-04-06 04:12:41.416542 | orchestrator | +---------------+------+---------+
2026-04-06 04:12:41.416700 | orchestrator | | Service Type
| Name | Default | 2026-04-06 04:12:41.416726 | orchestrator | +---------------+------+---------+ 2026-04-06 04:12:41.416748 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-04-06 04:12:41.416766 | orchestrator | +---------------+------+---------+ 2026-04-06 04:12:41.756926 | orchestrator | 2026-04-06 04:12:41.757031 | orchestrator | # Nova 2026-04-06 04:12:41.757049 | orchestrator | 2026-04-06 04:12:41.757062 | orchestrator | + echo 2026-04-06 04:12:41.757073 | orchestrator | + echo '# Nova' 2026-04-06 04:12:41.757085 | orchestrator | + echo 2026-04-06 04:12:41.757097 | orchestrator | + openstack compute service list 2026-04-06 04:12:44.742231 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-06 04:12:44.742336 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-04-06 04:12:44.742350 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-06 04:12:44.742388 | orchestrator | | a657bfe1-3fd1-47e4-bce0-32ec1c211dcf | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-06T04:12:36.000000 | 2026-04-06 04:12:44.742399 | orchestrator | | 232d3831-9f87-4f64-8ff9-0c6d96091b27 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-06T04:12:40.000000 | 2026-04-06 04:12:44.742408 | orchestrator | | ee8f610d-26d6-4eee-8468-60b19edf4d4f | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-06T04:12:41.000000 | 2026-04-06 04:12:44.742419 | orchestrator | | 59cd50b5-e990-4047-a03b-eb3483aba8f2 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-04-06T04:12:34.000000 | 2026-04-06 04:12:44.742429 | orchestrator | | 0b2c0489-0f27-4e2c-90b3-304234806bc1 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-06T04:12:36.000000 | 2026-04-06 04:12:44.742439 | orchestrator 
| | f2b671a6-23d3-4b65-96ca-30de89a29ddc | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-06T04:12:36.000000 | 2026-04-06 04:12:44.742449 | orchestrator | | d70bd592-fe8c-46ed-9b60-339b84eed0f2 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-06T04:12:37.000000 | 2026-04-06 04:12:44.742458 | orchestrator | | b3c36238-2c97-4ec1-88de-edbfde4400ea | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-06T04:12:37.000000 | 2026-04-06 04:12:44.742468 | orchestrator | | b4c648b1-f12b-41f7-8592-2a14ad52fc46 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-06T04:12:38.000000 | 2026-04-06 04:12:44.742478 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-06 04:12:45.050279 | orchestrator | + openstack hypervisor list 2026-04-06 04:12:47.896519 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-06 04:12:47.896746 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-04-06 04:12:47.896779 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-06 04:12:47.896799 | orchestrator | | 17c0be7d-16bc-4a51-b0e4-4d4f5bc1c7bb | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-04-06 04:12:47.896819 | orchestrator | | 383f23fc-0ca6-42fb-8b66-be3e2ce2de68 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-04-06 04:12:47.896838 | orchestrator | | b8e90d29-893f-4ddc-89c5-12fe53efef7d | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-04-06 04:12:47.896852 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-06 04:12:48.252492 | orchestrator | 2026-04-06 04:12:48.252676 | orchestrator | # Run OpenStack test play 2026-04-06 04:12:48.252707 | orchestrator | 2026-04-06 
04:12:48.252727 | orchestrator | + echo 2026-04-06 04:12:48.252747 | orchestrator | + echo '# Run OpenStack test play' 2026-04-06 04:12:48.252773 | orchestrator | + echo 2026-04-06 04:12:48.252794 | orchestrator | + osism apply --environment openstack test 2026-04-06 04:12:50.506385 | orchestrator | 2026-04-06 04:12:50 | INFO  | Trying to run play test in environment openstack 2026-04-06 04:13:00.631832 | orchestrator | 2026-04-06 04:13:00 | INFO  | Task 18c16d72-0cee-4c6c-91d0-b0b2ffef2901 (test) was prepared for execution. 2026-04-06 04:13:00.631958 | orchestrator | 2026-04-06 04:13:00 | INFO  | It takes a moment until task 18c16d72-0cee-4c6c-91d0-b0b2ffef2901 (test) has been started and output is visible here. 2026-04-06 04:16:17.976180 | orchestrator | 2026-04-06 04:16:17.976280 | orchestrator | PLAY [Create test project] ***************************************************** 2026-04-06 04:16:17.976293 | orchestrator | 2026-04-06 04:16:17.976301 | orchestrator | TASK [Create test domain] ****************************************************** 2026-04-06 04:16:17.976309 | orchestrator | Monday 06 April 2026 04:13:05 +0000 (0:00:00.093) 0:00:00.093 ********** 2026-04-06 04:16:17.976316 | orchestrator | changed: [localhost] 2026-04-06 04:16:17.976324 | orchestrator | 2026-04-06 04:16:17.976331 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-04-06 04:16:17.976338 | orchestrator | Monday 06 April 2026 04:13:09 +0000 (0:00:04.041) 0:00:04.135 ********** 2026-04-06 04:16:17.976366 | orchestrator | changed: [localhost] 2026-04-06 04:16:17.976374 | orchestrator | 2026-04-06 04:16:17.976381 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-04-06 04:16:17.976387 | orchestrator | Monday 06 April 2026 04:13:13 +0000 (0:00:04.571) 0:00:08.707 ********** 2026-04-06 04:16:17.976393 | orchestrator | changed: [localhost] 2026-04-06 04:16:17.976400 | orchestrator | 2026-04-06 
04:16:17.976406 | orchestrator | TASK [Create test project] ***************************************************** 2026-04-06 04:16:17.976414 | orchestrator | Monday 06 April 2026 04:13:20 +0000 (0:00:07.122) 0:00:15.829 ********** 2026-04-06 04:16:17.976420 | orchestrator | changed: [localhost] 2026-04-06 04:16:17.976426 | orchestrator | 2026-04-06 04:16:17.976435 | orchestrator | TASK [Create test user] ******************************************************** 2026-04-06 04:16:17.976442 | orchestrator | Monday 06 April 2026 04:13:25 +0000 (0:00:04.339) 0:00:20.169 ********** 2026-04-06 04:16:17.976449 | orchestrator | changed: [localhost] 2026-04-06 04:16:17.976456 | orchestrator | 2026-04-06 04:16:17.976463 | orchestrator | TASK [Add member roles to user test] ******************************************* 2026-04-06 04:16:17.976469 | orchestrator | Monday 06 April 2026 04:13:29 +0000 (0:00:04.623) 0:00:24.792 ********** 2026-04-06 04:16:17.976475 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-04-06 04:16:17.976482 | orchestrator | changed: [localhost] => (item=member) 2026-04-06 04:16:17.976490 | orchestrator | changed: [localhost] => (item=creator) 2026-04-06 04:16:17.976496 | orchestrator | 2026-04-06 04:16:17.976503 | orchestrator | TASK [Create test server group] ************************************************ 2026-04-06 04:16:17.976510 | orchestrator | Monday 06 April 2026 04:13:42 +0000 (0:00:12.409) 0:00:37.201 ********** 2026-04-06 04:16:17.976516 | orchestrator | changed: [localhost] 2026-04-06 04:16:17.976523 | orchestrator | 2026-04-06 04:16:17.976544 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-04-06 04:16:17.976551 | orchestrator | Monday 06 April 2026 04:13:47 +0000 (0:00:04.712) 0:00:41.914 ********** 2026-04-06 04:16:17.976558 | orchestrator | changed: [localhost] 2026-04-06 04:16:17.976565 | orchestrator | 2026-04-06 04:16:17.976571 | orchestrator | TASK [Add rule 
to ssh security group] ****************************************** 2026-04-06 04:16:17.976578 | orchestrator | Monday 06 April 2026 04:13:52 +0000 (0:00:05.236) 0:00:47.151 ********** 2026-04-06 04:16:17.976584 | orchestrator | changed: [localhost] 2026-04-06 04:16:17.976590 | orchestrator | 2026-04-06 04:16:17.976597 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-04-06 04:16:17.976603 | orchestrator | Monday 06 April 2026 04:13:56 +0000 (0:00:04.543) 0:00:51.695 ********** 2026-04-06 04:16:17.976610 | orchestrator | changed: [localhost] 2026-04-06 04:16:17.976616 | orchestrator | 2026-04-06 04:16:17.976623 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2026-04-06 04:16:17.976629 | orchestrator | Monday 06 April 2026 04:14:01 +0000 (0:00:04.217) 0:00:55.912 ********** 2026-04-06 04:16:17.976635 | orchestrator | changed: [localhost] 2026-04-06 04:16:17.976684 | orchestrator | 2026-04-06 04:16:17.976692 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-04-06 04:16:17.976697 | orchestrator | Monday 06 April 2026 04:14:05 +0000 (0:00:04.485) 0:01:00.398 ********** 2026-04-06 04:16:17.976703 | orchestrator | changed: [localhost] 2026-04-06 04:16:17.976708 | orchestrator | 2026-04-06 04:16:17.976714 | orchestrator | TASK [Create test networks] **************************************************** 2026-04-06 04:16:17.976719 | orchestrator | Monday 06 April 2026 04:14:09 +0000 (0:00:04.471) 0:01:04.869 ********** 2026-04-06 04:16:17.976725 | orchestrator | changed: [localhost] => (item={'name': 'test-1'}) 2026-04-06 04:16:17.976731 | orchestrator | changed: [localhost] => (item={'name': 'test-2'}) 2026-04-06 04:16:17.976736 | orchestrator | changed: [localhost] => (item={'name': 'test-3'}) 2026-04-06 04:16:17.976741 | orchestrator | 2026-04-06 04:16:17.976749 | orchestrator | TASK [Create test subnets] 
***************************************************** 2026-04-06 04:16:17.976763 | orchestrator | Monday 06 April 2026 04:14:24 +0000 (0:00:14.655) 0:01:19.525 ********** 2026-04-06 04:16:17.976770 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'}) 2026-04-06 04:16:17.976777 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'}) 2026-04-06 04:16:17.976783 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'}) 2026-04-06 04:16:17.976790 | orchestrator | 2026-04-06 04:16:17.976797 | orchestrator | TASK [Create test routers] ***************************************************** 2026-04-06 04:16:17.976803 | orchestrator | Monday 06 April 2026 04:14:41 +0000 (0:00:17.081) 0:01:36.606 ********** 2026-04-06 04:16:17.976810 | orchestrator | changed: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'}) 2026-04-06 04:16:17.976822 | orchestrator | changed: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'}) 2026-04-06 04:16:17.976830 | orchestrator | changed: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'}) 2026-04-06 04:16:17.976837 | orchestrator | 2026-04-06 04:16:17.976844 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-04-06 04:16:17.976851 | orchestrator | 2026-04-06 04:16:17.976857 | orchestrator | TASK [Get test server group] *************************************************** 2026-04-06 04:16:17.976881 | orchestrator | Monday 06 April 2026 04:15:15 +0000 (0:00:33.630) 0:02:10.237 ********** 2026-04-06 04:16:17.976888 | orchestrator | ok: [localhost] 2026-04-06 04:16:17.976896 | orchestrator | 2026-04-06 04:16:17.976903 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-04-06 
04:16:17.976910 | orchestrator | Monday 06 April 2026 04:15:19 +0000 (0:00:04.238) 0:02:14.475 ********** 2026-04-06 04:16:17.976917 | orchestrator | skipping: [localhost] 2026-04-06 04:16:17.976924 | orchestrator | 2026-04-06 04:16:17.976930 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-04-06 04:16:17.976936 | orchestrator | Monday 06 April 2026 04:15:19 +0000 (0:00:00.059) 0:02:14.535 ********** 2026-04-06 04:16:17.976941 | orchestrator | skipping: [localhost] 2026-04-06 04:16:17.976947 | orchestrator | 2026-04-06 04:16:17.976954 | orchestrator | TASK [Delete test instances] *************************************************** 2026-04-06 04:16:17.976960 | orchestrator | Monday 06 April 2026 04:15:19 +0000 (0:00:00.051) 0:02:14.586 ********** 2026-04-06 04:16:17.976967 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})  2026-04-06 04:16:17.976974 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})  2026-04-06 04:16:17.976981 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})  2026-04-06 04:16:17.976988 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})  2026-04-06 04:16:17.976995 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})  2026-04-06 04:16:17.977001 | orchestrator | skipping: [localhost] 2026-04-06 04:16:17.977008 | orchestrator | 2026-04-06 04:16:17.977014 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-04-06 04:16:17.977020 | orchestrator | Monday 06 April 2026 04:15:19 +0000 (0:00:00.179) 0:02:14.765 ********** 2026-04-06 04:16:17.977026 | orchestrator | skipping: [localhost] 2026-04-06 04:16:17.977032 | orchestrator | 2026-04-06 04:16:17.977038 | orchestrator | TASK [Create test instances] *************************************************** 2026-04-06 
04:16:17.977045 | orchestrator | Monday 06 April 2026 04:15:20 +0000 (0:00:00.162) 0:02:14.928 ********** 2026-04-06 04:16:17.977051 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-06 04:16:17.977057 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-06 04:16:17.977063 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-06 04:16:17.977069 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-06 04:16:17.977082 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-06 04:16:17.977089 | orchestrator | 2026-04-06 04:16:17.977096 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-04-06 04:16:17.977102 | orchestrator | Monday 06 April 2026 04:15:25 +0000 (0:00:05.228) 0:02:20.157 ********** 2026-04-06 04:16:17.977109 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-04-06 04:16:17.977117 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 2026-04-06 04:16:17.977124 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-04-06 04:16:17.977131 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 
2026-04-06 04:16:17.977141 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j143567061557.3777', 'results_file': '/ansible/.ansible_async/j143567061557.3777', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-06 04:16:17.977151 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j826658360489.3802', 'results_file': '/ansible/.ansible_async/j826658360489.3802', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-06 04:16:17.977158 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j345666938650.3827', 'results_file': '/ansible/.ansible_async/j345666938650.3827', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-06 04:16:17.977165 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j44752915742.3852', 'results_file': '/ansible/.ansible_async/j44752915742.3852', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-06 04:16:17.977174 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j201360791800.3877', 'results_file': '/ansible/.ansible_async/j201360791800.3877', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-06 04:16:17.977181 | orchestrator |
2026-04-06 04:16:17.977187 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-04-06 04:16:17.977194 | orchestrator | Monday 06 April 2026 04:16:12 +0000 (0:00:47.509) 0:03:07.667 **********
2026-04-06 04:16:17.977200 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-06 04:16:17.977211 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-06 04:17:31.485087 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-06 04:17:31.485165 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-06 04:17:31.485173 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-06 04:17:31.485179 | orchestrator |
2026-04-06 04:17:31.485184 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-04-06 04:17:31.485189 | orchestrator | Monday 06 April 2026 04:16:17 +0000 (0:00:05.185) 0:03:12.852 **********
2026-04-06 04:17:31.485194 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
2026-04-06 04:17:31.485201 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j236290625013.3980', 'results_file': '/ansible/.ansible_async/j236290625013.3980', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-06 04:17:31.485207 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j926068397459.4005', 'results_file': '/ansible/.ansible_async/j926068397459.4005', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-06 04:17:31.485254 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j19896315158.4030', 'results_file': '/ansible/.ansible_async/j19896315158.4030', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-06 04:17:31.485259 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j738417771376.4055', 'results_file': '/ansible/.ansible_async/j738417771376.4055', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-06 04:17:31.485264 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j558664360600.4080', 'results_file': '/ansible/.ansible_async/j558664360600.4080', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-06 04:17:31.485268 | orchestrator |
2026-04-06 04:17:31.485273 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-04-06 04:17:31.485277 | orchestrator | Monday 06 April 2026 04:16:27 +0000 (0:00:09.883) 0:03:22.735 **********
2026-04-06 04:17:31.485281 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-06 04:17:31.485285 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-06 04:17:31.485289 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-06 04:17:31.485294 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-06 04:17:31.485298 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-06 04:17:31.485302 | orchestrator |
2026-04-06 04:17:31.485307 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-04-06 04:17:31.485311 | orchestrator | Monday 06 April 2026 04:16:33 +0000 (0:00:05.602) 0:03:28.337 **********
2026-04-06 04:17:31.485315 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-04-06 04:17:31.485320 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j370789498864.4149', 'results_file': '/ansible/.ansible_async/j370789498864.4149', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-06 04:17:31.485324 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j743616579010.4174', 'results_file': '/ansible/.ansible_async/j743616579010.4174', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-06 04:17:31.485329 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j120477467045.4200', 'results_file': '/ansible/.ansible_async/j120477467045.4200', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-06 04:17:31.485342 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j577820739998.4232', 'results_file': '/ansible/.ansible_async/j577820739998.4232', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-06 04:17:31.485358 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j166714281908.4258', 'results_file': '/ansible/.ansible_async/j166714281908.4258', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-06 04:17:31.485362 | orchestrator |
2026-04-06 04:17:31.485367 | orchestrator | TASK [Create test volume] ******************************************************
2026-04-06 04:17:31.485371 | orchestrator | Monday 06 April 2026 04:16:43 +0000 (0:00:10.387) 0:03:38.725 **********
2026-04-06 04:17:31.485375 | orchestrator | changed: [localhost]
2026-04-06 04:17:31.485385 | orchestrator |
2026-04-06 04:17:31.485390 | orchestrator | TASK [Attach test volume] ******************************************************
2026-04-06 04:17:31.485394 | orchestrator | Monday 06 April 2026 04:16:50 +0000 (0:00:06.644) 0:03:45.369 **********
2026-04-06 04:17:31.485398 | orchestrator | changed: [localhost]
2026-04-06 04:17:31.485402 | orchestrator |
2026-04-06 04:17:31.485406 | orchestrator | TASK [Create floating ip addresses] ********************************************
2026-04-06 04:17:31.485411 | orchestrator | Monday 06 April 2026 04:17:04 +0000 (0:00:14.191) 0:03:59.560 **********
2026-04-06 04:17:31.485415 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-06 04:17:31.485420 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-06 04:17:31.485424 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-06 04:17:31.485428 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-06 04:17:31.485432 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-06 04:17:31.485436 | orchestrator |
2026-04-06 04:17:31.485441 | orchestrator | TASK [Print floating ip addresses] *********************************************
2026-04-06 04:17:31.485445 | orchestrator | Monday 06 April 2026 04:17:31 +0000 (0:00:26.348) 0:04:25.909 **********
2026-04-06 04:17:31.485449 | orchestrator | ok: [localhost] => (item=test) => {
2026-04-06 04:17:31.485453 | orchestrator |     "msg": "test: 192.168.112.198"
2026-04-06 04:17:31.485458 | orchestrator | }
2026-04-06 04:17:31.485463 | orchestrator | ok: [localhost] => (item=test-1) => {
2026-04-06 04:17:31.485468 | orchestrator |     "msg": "test-1: 192.168.112.104"
2026-04-06 04:17:31.485472 | orchestrator | }
2026-04-06 04:17:31.485476 | orchestrator | ok: [localhost] => (item=test-2) => {
2026-04-06 04:17:31.485480 | orchestrator |     "msg": "test-2: 192.168.112.166"
2026-04-06 04:17:31.485485 | orchestrator | }
2026-04-06 04:17:31.485489 | orchestrator | ok: [localhost] => (item=test-3) => {
2026-04-06 04:17:31.485493 | orchestrator |     "msg": "test-3: 192.168.112.192"
2026-04-06 04:17:31.485497 | orchestrator | }
2026-04-06 04:17:31.485501 | orchestrator | ok: [localhost] => (item=test-4) => {
2026-04-06 04:17:31.485505 | orchestrator |     "msg": "test-4: 192.168.112.199"
2026-04-06 04:17:31.485510 | orchestrator | }
2026-04-06 04:17:31.485514 | orchestrator |
2026-04-06 04:17:31.485518 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 04:17:31.485523 | orchestrator | localhost : ok=26  changed=23  unreachable=0  failed=0  skipped=4  rescued=0  ignored=0
2026-04-06 04:17:31.485528 | orchestrator |
2026-04-06 04:17:31.485533 | orchestrator |
2026-04-06 04:17:31.485537 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 04:17:31.485541 | orchestrator | Monday 06 April 2026 04:17:31 +0000 (0:00:00.135) 0:04:26.044 **********
2026-04-06 04:17:31.485545 | orchestrator | ===============================================================================
2026-04-06 04:17:31.485549 | orchestrator | Wait for instance creation to complete --------------------------------- 47.51s
2026-04-06 04:17:31.485554 | orchestrator | Create test routers ---------------------------------------------------- 33.63s
2026-04-06 04:17:31.485558 | orchestrator | Create floating ip addresses ------------------------------------------- 26.35s
2026-04-06 04:17:31.485562 | orchestrator | Create test subnets ---------------------------------------------------- 17.08s
2026-04-06 04:17:31.485566 | orchestrator | Create test networks --------------------------------------------------- 14.66s
2026-04-06 04:17:31.485571 | orchestrator | Attach test volume ----------------------------------------------------- 14.19s
2026-04-06 04:17:31.485575 | orchestrator | Add member roles to user test ------------------------------------------ 12.41s
2026-04-06 04:17:31.485579 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.39s
2026-04-06 04:17:31.485583 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.88s
2026-04-06 04:17:31.485587 | orchestrator | Add manager role to user test-admin ------------------------------------- 7.12s
2026-04-06 04:17:31.485595 | orchestrator | Create test volume ------------------------------------------------------ 6.64s
2026-04-06 04:17:31.485599 | orchestrator | Add tag to instances ---------------------------------------------------- 5.60s
2026-04-06 04:17:31.485603 | orchestrator | Create ssh security group ----------------------------------------------- 5.24s
2026-04-06 04:17:31.485608 | orchestrator | Create test instances --------------------------------------------------- 5.23s
2026-04-06 04:17:31.485612 | orchestrator | Add metadata to instances ----------------------------------------------- 5.19s
2026-04-06 04:17:31.485616 | orchestrator | Create test server group ------------------------------------------------ 4.71s
2026-04-06 04:17:31.485620 | orchestrator | Create test user -------------------------------------------------------- 4.62s
2026-04-06 04:17:31.485624 | orchestrator | Create test-admin user -------------------------------------------------- 4.57s
2026-04-06 04:17:31.485628 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.54s
2026-04-06 04:17:31.485635 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.49s
2026-04-06 04:17:31.858094 | orchestrator | + server_list
2026-04-06 04:17:31.858180 | orchestrator | + openstack --os-cloud test server list
2026-04-06 04:17:36.087204 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-06 04:17:36.087299 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-04-06 04:17:36.087313 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-06 04:17:36.087324 | orchestrator | | 1ffb2222-76b0-4c33-9cb9-cccac50cb77d | test-4 | ACTIVE | test-3=192.168.112.199, 192.168.202.235 | N/A (booted from volume) | SCS-1L-1 |
2026-04-06 04:17:36.087334 | orchestrator | | 0b831986-3c20-42b1-9723-3d5a676521b6 | test-3 | ACTIVE | test-2=192.168.112.192, 192.168.201.192 | N/A (booted from volume) | SCS-1L-1 |
2026-04-06 04:17:36.087344 | orchestrator | | 0f709c30-1ea2-4513-8aef-595d77244c64 | test-1 | ACTIVE | test-1=192.168.112.104, 192.168.200.92 | N/A (booted from volume) | SCS-1L-1 |
2026-04-06 04:17:36.087353 | orchestrator | | 75b3ae9e-45d4-4502-9b59-90c56e5692aa | test-2 | ACTIVE | test-2=192.168.112.166, 192.168.201.198 | N/A (booted from volume) | SCS-1L-1 |
2026-04-06 04:17:36.087363 | orchestrator | | 796def20-ed7f-4340-916d-2b9955f332ee | test | ACTIVE | test-1=192.168.112.198, 192.168.200.201 | N/A (booted from volume) | SCS-1L-1 |
2026-04-06 04:17:36.087373 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-06 04:17:36.470331 | orchestrator | + openstack --os-cloud test server show test
2026-04-06 04:17:40.315265 | orchestrator | +-------------------------------------+----------------------------+
2026-04-06 04:17:40.315364 | orchestrator | | Field | Value |
2026-04-06 04:17:40.315379 | orchestrator | +-------------------------------------+----------------------------+
2026-04-06 04:17:40.315447 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-06 04:17:40.315469 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-06 04:17:40.315483 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-06 04:17:40.315498 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-04-06 04:17:40.315513 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-06 04:17:40.315526 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-06 04:17:40.315562 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-06 04:17:40.315579 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-06 04:17:40.315595 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-06 04:17:40.315609 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-06 04:17:40.315644 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-06 04:17:40.315698 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-06 04:17:40.315713 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-06 04:17:40.315734 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-06 04:17:40.315783 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-06 04:17:40.315801 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-06T04:15:56.000000 |
2026-04-06 04:17:40.315836 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-06 04:17:40.315854 | orchestrator | | accessIPv4 | |
2026-04-06 04:17:40.315869 | orchestrator | | accessIPv6 | |
2026-04-06 04:17:40.315918 | orchestrator
| | addresses | test-1=192.168.112.198, 192.168.200.201 | 2026-04-06 04:17:40.315936 | orchestrator | | config_drive | | 2026-04-06 04:17:40.315953 | orchestrator | | created | 2026-04-06T04:15:29Z | 2026-04-06 04:17:40.315970 | orchestrator | | description | None | 2026-04-06 04:17:40.315993 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-06 04:17:40.316011 | orchestrator | | hostId | 21957355140bcc184b1c12f902990c51adb9a73ef43b89b8497ef939 | 2026-04-06 04:17:40.316027 | orchestrator | | host_status | None | 2026-04-06 04:17:40.316056 | orchestrator | | id | 796def20-ed7f-4340-916d-2b9955f332ee | 2026-04-06 04:17:40.316074 | orchestrator | | image | N/A (booted from volume) | 2026-04-06 04:17:40.316136 | orchestrator | | key_name | test | 2026-04-06 04:17:40.316152 | orchestrator | | locked | False | 2026-04-06 04:17:40.316167 | orchestrator | | locked_reason | None | 2026-04-06 04:17:40.316180 | orchestrator | | name | test | 2026-04-06 04:17:40.316193 | orchestrator | | pinned_availability_zone | None | 2026-04-06 04:17:40.316215 | orchestrator | | progress | 0 | 2026-04-06 04:17:40.316231 | orchestrator | | project_id | b933bc95b8d74bedbf85f7b32e53eaa4 | 2026-04-06 04:17:40.316246 | orchestrator | | properties | hostname='test' | 2026-04-06 04:17:40.316270 | orchestrator | | security_groups | name='ssh' | 2026-04-06 04:17:40.316295 | orchestrator | | | name='icmp' | 2026-04-06 04:17:40.316311 | orchestrator | | server_groups | None | 2026-04-06 04:17:40.316326 | orchestrator | | status | ACTIVE | 2026-04-06 04:17:40.316341 | orchestrator | | tags | test | 2026-04-06 04:17:40.316355 | orchestrator | | 
trusted_image_certificates | None | 2026-04-06 04:17:40.316364 | orchestrator | | updated | 2026-04-06T04:16:19Z | 2026-04-06 04:17:40.316398 | orchestrator | | user_id | 4176df0ad4d04ae4ba2deebeae721468 | 2026-04-06 04:17:40.316419 | orchestrator | | volumes_attached | delete_on_termination='True', id='55ed4f5b-9558-4efd-891b-02394bcf9221' | 2026-04-06 04:17:40.316437 | orchestrator | | | delete_on_termination='False', id='9703a467-a300-447e-a3d7-87eaf6ab1d29' | 2026-04-06 04:17:40.323239 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-06 04:17:40.670056 | orchestrator | + openstack --os-cloud test server show test-1 2026-04-06 04:17:43.883527 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-06 04:17:43.883595 | orchestrator | | Field | Value | 2026-04-06 04:17:43.883602 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 
2026-04-06 04:17:43.883607 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-06 04:17:43.883611 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-06 04:17:43.883631 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-06 04:17:43.883636 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-04-06 04:17:43.883640 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-06 04:17:43.883644 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-06 04:17:43.883718 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-06 04:17:43.883726 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-06 04:17:43.883732 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-06 04:17:43.883738 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-06 04:17:43.883744 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-06 04:17:43.883750 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-06 04:17:43.883757 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-06 04:17:43.883763 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-06 04:17:43.883770 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-06 04:17:43.883782 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-06T04:15:56.000000 | 2026-04-06 04:17:43.883790 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-06 04:17:43.883795 | orchestrator | | accessIPv4 | | 2026-04-06 04:17:43.883798 | orchestrator | | accessIPv6 | | 2026-04-06 04:17:43.883802 | orchestrator | | addresses | test-1=192.168.112.104, 192.168.200.92 | 2026-04-06 04:17:43.883806 | orchestrator | | config_drive | | 2026-04-06 04:17:43.883823 | orchestrator | | created | 2026-04-06T04:15:31Z | 2026-04-06 04:17:43.883835 | orchestrator | | description | None | 2026-04-06 04:17:43.883840 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', 
extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-06 04:17:43.883847 | orchestrator | | hostId | 21957355140bcc184b1c12f902990c51adb9a73ef43b89b8497ef939 | 2026-04-06 04:17:43.883851 | orchestrator | | host_status | None | 2026-04-06 04:17:43.883859 | orchestrator | | id | 0f709c30-1ea2-4513-8aef-595d77244c64 | 2026-04-06 04:17:43.883863 | orchestrator | | image | N/A (booted from volume) | 2026-04-06 04:17:43.883867 | orchestrator | | key_name | test | 2026-04-06 04:17:43.883871 | orchestrator | | locked | False | 2026-04-06 04:17:43.883874 | orchestrator | | locked_reason | None | 2026-04-06 04:17:43.883878 | orchestrator | | name | test-1 | 2026-04-06 04:17:43.883885 | orchestrator | | pinned_availability_zone | None | 2026-04-06 04:17:43.883899 | orchestrator | | progress | 0 | 2026-04-06 04:17:43.883903 | orchestrator | | project_id | b933bc95b8d74bedbf85f7b32e53eaa4 | 2026-04-06 04:17:43.883907 | orchestrator | | properties | hostname='test-1' | 2026-04-06 04:17:43.883914 | orchestrator | | security_groups | name='ssh' | 2026-04-06 04:17:43.883918 | orchestrator | | | name='icmp' | 2026-04-06 04:17:43.883922 | orchestrator | | server_groups | None | 2026-04-06 04:17:43.883926 | orchestrator | | status | ACTIVE | 2026-04-06 04:17:43.883930 | orchestrator | | tags | test | 2026-04-06 04:17:43.883934 | orchestrator | | trusted_image_certificates | None | 2026-04-06 04:17:43.883946 | orchestrator | | updated | 2026-04-06T04:16:20Z | 2026-04-06 04:17:43.883953 | orchestrator | | user_id | 4176df0ad4d04ae4ba2deebeae721468 | 2026-04-06 04:17:43.883957 | orchestrator | | volumes_attached | delete_on_termination='True', id='012b0a6e-bcf6-4309-9489-869d9b9c8452' | 2026-04-06 04:17:43.889120 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-06 04:17:44.230081 | orchestrator | + openstack --os-cloud test server show test-2 2026-04-06 04:17:47.499276 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-06 04:17:47.499398 | orchestrator | | Field | Value | 2026-04-06 04:17:47.499422 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-06 04:17:47.499440 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-06 04:17:47.499457 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-06 04:17:47.499474 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-06 04:17:47.499539 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-04-06 04:17:47.499558 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-06 04:17:47.499571 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-06 
04:17:47.499601 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-06 04:17:47.499612 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-06 04:17:47.499623 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-06 04:17:47.499633 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-06 04:17:47.499643 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-06 04:17:47.499707 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-06 04:17:47.499729 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-06 04:17:47.499746 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-06 04:17:47.499756 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-06 04:17:47.499766 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-06T04:15:57.000000 | 2026-04-06 04:17:47.499784 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-06 04:17:47.499794 | orchestrator | | accessIPv4 | | 2026-04-06 04:17:47.499804 | orchestrator | | accessIPv6 | | 2026-04-06 04:17:47.499814 | orchestrator | | addresses | test-2=192.168.112.166, 192.168.201.198 | 2026-04-06 04:17:47.499824 | orchestrator | | config_drive | | 2026-04-06 04:17:47.499841 | orchestrator | | created | 2026-04-06T04:15:31Z | 2026-04-06 04:17:47.499851 | orchestrator | | description | None | 2026-04-06 04:17:47.499911 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-06 04:17:47.499923 | orchestrator | | hostId | b301908c463c5194244abbaa337388541f1ff3778f7515d701b9fb7e | 2026-04-06 04:17:47.499933 | orchestrator | | host_status | None | 2026-04-06 04:17:47.499950 | orchestrator | | id | 
75b3ae9e-45d4-4502-9b59-90c56e5692aa | 2026-04-06 04:17:47.499961 | orchestrator | | image | N/A (booted from volume) | 2026-04-06 04:17:47.499971 | orchestrator | | key_name | test | 2026-04-06 04:17:47.499981 | orchestrator | | locked | False | 2026-04-06 04:17:47.500014 | orchestrator | | locked_reason | None | 2026-04-06 04:17:47.500025 | orchestrator | | name | test-2 | 2026-04-06 04:17:47.500036 | orchestrator | | pinned_availability_zone | None | 2026-04-06 04:17:47.500046 | orchestrator | | progress | 0 | 2026-04-06 04:17:47.500057 | orchestrator | | project_id | b933bc95b8d74bedbf85f7b32e53eaa4 | 2026-04-06 04:17:47.500067 | orchestrator | | properties | hostname='test-2' | 2026-04-06 04:17:47.500084 | orchestrator | | security_groups | name='ssh' | 2026-04-06 04:17:47.500094 | orchestrator | | | name='icmp' | 2026-04-06 04:17:47.500104 | orchestrator | | server_groups | None | 2026-04-06 04:17:47.500619 | orchestrator | | status | ACTIVE | 2026-04-06 04:17:47.500647 | orchestrator | | tags | test | 2026-04-06 04:17:47.500686 | orchestrator | | trusted_image_certificates | None | 2026-04-06 04:17:47.500698 | orchestrator | | updated | 2026-04-06T04:16:21Z | 2026-04-06 04:17:47.500708 | orchestrator | | user_id | 4176df0ad4d04ae4ba2deebeae721468 | 2026-04-06 04:17:47.500718 | orchestrator | | volumes_attached | delete_on_termination='True', id='483c19c8-8cd9-4655-b886-899f437c0c74' | 2026-04-06 04:17:47.503057 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-06 04:17:47.826561 | orchestrator | + openstack --os-cloud test server show test-3 2026-04-06 04:17:51.019497 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-06 04:17:51.019598 | orchestrator | | Field | Value | 2026-04-06 04:17:51.019614 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-06 04:17:51.019716 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-06 04:17:51.019733 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-06 04:17:51.019744 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-06 04:17:51.019756 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-04-06 04:17:51.019767 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-06 04:17:51.019778 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-06 04:17:51.019809 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-06 04:17:51.019821 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-06 04:17:51.019832 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-06 04:17:51.019856 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-06 04:17:51.019873 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-06 04:17:51.019885 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-06 04:17:51.019897 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-04-06 04:17:51.019908 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-06 04:17:51.019919 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-06 04:17:51.019930 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-06T04:15:57.000000 | 2026-04-06 04:17:51.019950 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-06 04:17:51.019962 | orchestrator | | accessIPv4 | | 2026-04-06 04:17:51.019980 | orchestrator | | accessIPv6 | | 2026-04-06 04:17:51.019991 | orchestrator | | addresses | test-2=192.168.112.192, 192.168.201.192 | 2026-04-06 04:17:51.020008 | orchestrator | | config_drive | | 2026-04-06 04:17:51.020020 | orchestrator | | created | 2026-04-06T04:15:32Z | 2026-04-06 04:17:51.020032 | orchestrator | | description | None | 2026-04-06 04:17:51.020046 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-06 04:17:51.020060 | orchestrator | | hostId | b301908c463c5194244abbaa337388541f1ff3778f7515d701b9fb7e | 2026-04-06 04:17:51.020074 | orchestrator | | host_status | None | 2026-04-06 04:17:51.020099 | orchestrator | | id | 0b831986-3c20-42b1-9723-3d5a676521b6 | 2026-04-06 04:17:51.020146 | orchestrator | | image | N/A (booted from volume) | 2026-04-06 04:17:51.020168 | orchestrator | | key_name | test | 2026-04-06 04:17:51.020188 | orchestrator | | locked | False | 2026-04-06 04:17:51.020216 | orchestrator | | locked_reason | None | 2026-04-06 04:17:51.020238 | orchestrator | | name | test-3 | 2026-04-06 04:17:51.020258 | orchestrator | | pinned_availability_zone | None | 2026-04-06 04:17:51.020280 | orchestrator | | progress | 0 | 2026-04-06 
04:17:51.020300 | orchestrator | | project_id | b933bc95b8d74bedbf85f7b32e53eaa4 | 2026-04-06 04:17:51.020312 | orchestrator | | properties | hostname='test-3' | 2026-04-06 04:17:51.020332 | orchestrator | | security_groups | name='ssh' | 2026-04-06 04:17:51.020402 | orchestrator | | | name='icmp' | 2026-04-06 04:17:51.020415 | orchestrator | | server_groups | None | 2026-04-06 04:17:51.020427 | orchestrator | | status | ACTIVE | 2026-04-06 04:17:51.020458 | orchestrator | | tags | test | 2026-04-06 04:17:51.020471 | orchestrator | | trusted_image_certificates | None | 2026-04-06 04:17:51.020482 | orchestrator | | updated | 2026-04-06T04:16:21Z | 2026-04-06 04:17:51.020494 | orchestrator | | user_id | 4176df0ad4d04ae4ba2deebeae721468 | 2026-04-06 04:17:51.020505 | orchestrator | | volumes_attached | delete_on_termination='True', id='bcc7956e-4fb8-4f0b-b006-e411b174ebdc' | 2026-04-06 04:17:51.024631 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-06 04:17:51.342204 | orchestrator | + openstack --os-cloud test server show test-4 2026-04-06 04:17:54.681276 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-06 04:17:54.681362 | orchestrator | | Field | Value | 2026-04-06 04:17:54.681372 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-06 04:17:54.681379 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-06 04:17:54.681396 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-06 04:17:54.681403 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-06 04:17:54.681409 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-04-06 04:17:54.681415 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-06 04:17:54.681421 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-06 04:17:54.681455 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-06 04:17:54.681462 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-06 04:17:54.681469 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-06 04:17:54.681475 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-06 04:17:54.681480 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-06 04:17:54.681487 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-06 04:17:54.681493 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-06 04:17:54.681499 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-06 04:17:54.681505 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-06 04:17:54.681515 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-06T04:15:57.000000 | 2026-04-06 04:17:54.681526 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-06 04:17:54.681584 | orchestrator | | accessIPv4 | | 2026-04-06 04:17:54.681595 | orchestrator | | accessIPv6 | | 2026-04-06 04:17:54.681601 | orchestrator | | 
addresses | test-3=192.168.112.199, 192.168.202.235 | 2026-04-06 04:17:54.681607 | orchestrator | | config_drive | | 2026-04-06 04:17:54.681616 | orchestrator | | created | 2026-04-06T04:15:33Z | 2026-04-06 04:17:54.681622 | orchestrator | | description | None | 2026-04-06 04:17:54.681628 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-06 04:17:54.681633 | orchestrator | | hostId | b301908c463c5194244abbaa337388541f1ff3778f7515d701b9fb7e | 2026-04-06 04:17:54.681643 | orchestrator | | host_status | None | 2026-04-06 04:17:54.681706 | orchestrator | | id | 1ffb2222-76b0-4c33-9cb9-cccac50cb77d | 2026-04-06 04:17:54.681713 | orchestrator | | image | N/A (booted from volume) | 2026-04-06 04:17:54.681719 | orchestrator | | key_name | test | 2026-04-06 04:17:54.681724 | orchestrator | | locked | False | 2026-04-06 04:17:54.681733 | orchestrator | | locked_reason | None | 2026-04-06 04:17:54.681739 | orchestrator | | name | test-4 | 2026-04-06 04:17:54.681745 | orchestrator | | pinned_availability_zone | None | 2026-04-06 04:17:54.681750 | orchestrator | | progress | 0 | 2026-04-06 04:17:54.681760 | orchestrator | | project_id | b933bc95b8d74bedbf85f7b32e53eaa4 | 2026-04-06 04:17:54.681766 | orchestrator | | properties | hostname='test-4' | 2026-04-06 04:17:54.681777 | orchestrator | | security_groups | name='ssh' | 2026-04-06 04:17:54.681783 | orchestrator | | | name='icmp' | 2026-04-06 04:17:54.681788 | orchestrator | | server_groups | None | 2026-04-06 04:17:54.681794 | orchestrator | | status | ACTIVE | 2026-04-06 04:17:54.681803 | orchestrator | | tags | test | 2026-04-06 04:17:54.681809 | orchestrator | | 
trusted_image_certificates | None | 2026-04-06 04:17:54.681814 | orchestrator | | updated | 2026-04-06T04:16:22Z | 2026-04-06 04:17:54.681824 | orchestrator | | user_id | 4176df0ad4d04ae4ba2deebeae721468 | 2026-04-06 04:17:54.681830 | orchestrator | | volumes_attached | delete_on_termination='True', id='5b097108-b650-4096-962a-9dbc133776b1' | 2026-04-06 04:17:54.686549 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-06 04:17:55.010707 | orchestrator | + server_ping 2026-04-06 04:17:55.011241 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-06 04:17:55.011401 | orchestrator | ++ tr -d '\r' 2026-04-06 04:17:58.226950 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-06 04:17:58.227047 | orchestrator | + ping -c3 192.168.112.192 2026-04-06 04:17:58.243572 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data. 
2026-04-06 04:17:58.243716 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=8.11 ms 2026-04-06 04:17:59.240242 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.61 ms 2026-04-06 04:18:00.239989 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.74 ms 2026-04-06 04:18:00.240069 | orchestrator | 2026-04-06 04:18:00.240076 | orchestrator | --- 192.168.112.192 ping statistics --- 2026-04-06 04:18:00.240082 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-06 04:18:00.240087 | orchestrator | rtt min/avg/max/mdev = 1.738/4.149/8.105/2.819 ms 2026-04-06 04:18:00.240569 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-06 04:18:00.240594 | orchestrator | + ping -c3 192.168.112.166 2026-04-06 04:18:00.253749 | orchestrator | PING 192.168.112.166 (192.168.112.166) 56(84) bytes of data. 2026-04-06 04:18:00.253833 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=1 ttl=63 time=8.20 ms 2026-04-06 04:18:01.250001 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=2 ttl=63 time=2.70 ms 2026-04-06 04:18:02.252450 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=3 ttl=63 time=1.93 ms 2026-04-06 04:18:02.252569 | orchestrator | 2026-04-06 04:18:02.252593 | orchestrator | --- 192.168.112.166 ping statistics --- 2026-04-06 04:18:02.252607 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-06 04:18:02.252620 | orchestrator | rtt min/avg/max/mdev = 1.925/4.274/8.203/2.795 ms 2026-04-06 04:18:02.252636 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-06 04:18:02.252651 | orchestrator | + ping -c3 192.168.112.104 2026-04-06 04:18:02.265919 | orchestrator | PING 192.168.112.104 (192.168.112.104) 56(84) bytes of data. 
2026-04-06 04:18:02.265993 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=1 ttl=63 time=8.97 ms 2026-04-06 04:18:03.261050 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=2 ttl=63 time=2.72 ms 2026-04-06 04:18:04.262876 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=3 ttl=63 time=1.49 ms 2026-04-06 04:18:04.263012 | orchestrator | 2026-04-06 04:18:04.263031 | orchestrator | --- 192.168.112.104 ping statistics --- 2026-04-06 04:18:04.263073 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-06 04:18:04.263086 | orchestrator | rtt min/avg/max/mdev = 1.489/4.390/8.966/3.274 ms 2026-04-06 04:18:04.263098 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-06 04:18:04.263110 | orchestrator | + ping -c3 192.168.112.199 2026-04-06 04:18:04.271515 | orchestrator | PING 192.168.112.199 (192.168.112.199) 56(84) bytes of data. 2026-04-06 04:18:04.271618 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=1 ttl=63 time=5.68 ms 2026-04-06 04:18:05.270288 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=2 ttl=63 time=2.57 ms 2026-04-06 04:18:06.272969 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=3 ttl=63 time=2.33 ms 2026-04-06 04:18:06.273055 | orchestrator | 2026-04-06 04:18:06.273068 | orchestrator | --- 192.168.112.199 ping statistics --- 2026-04-06 04:18:06.273078 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-06 04:18:06.273087 | orchestrator | rtt min/avg/max/mdev = 2.329/3.528/5.682/1.526 ms 2026-04-06 04:18:06.273096 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-06 04:18:06.273105 | orchestrator | + ping -c3 192.168.112.198 2026-04-06 04:18:06.284786 | orchestrator | PING 192.168.112.198 (192.168.112.198) 56(84) bytes of data. 
2026-04-06 04:18:06.284887 | orchestrator | 64 bytes from 192.168.112.198: icmp_seq=1 ttl=63 time=8.27 ms
2026-04-06 04:18:07.280489 | orchestrator | 64 bytes from 192.168.112.198: icmp_seq=2 ttl=63 time=2.52 ms
2026-04-06 04:18:08.282347 | orchestrator | 64 bytes from 192.168.112.198: icmp_seq=3 ttl=63 time=1.60 ms
2026-04-06 04:18:08.282577 | orchestrator |
2026-04-06 04:18:08.282604 | orchestrator | --- 192.168.112.198 ping statistics ---
2026-04-06 04:18:08.282626 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-06 04:18:08.282645 | orchestrator | rtt min/avg/max/mdev = 1.600/4.128/8.269/2.951 ms
2026-04-06 04:18:08.282783 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-04-06 04:18:08.698355 | orchestrator | ok: Runtime: 0:09:28.742197
2026-04-06 04:18:08.757220 |
2026-04-06 04:18:08.757377 | TASK [Run tempest]
2026-04-06 04:18:09.293525 | orchestrator | skipping: Conditional result was False
2026-04-06 04:18:09.311349 |
2026-04-06 04:18:09.311499 | TASK [Check prometheus alert status]
2026-04-06 04:18:09.848613 | orchestrator | skipping: Conditional result was False
2026-04-06 04:18:09.863305 |
2026-04-06 04:18:09.863456 | PLAY [Upgrade testbed]
2026-04-06 04:18:09.875104 |
2026-04-06 04:18:09.875225 | TASK [Print next ceph version]
2026-04-06 04:18:09.974736 | orchestrator | ok
2026-04-06 04:18:09.984948 |
2026-04-06 04:18:09.985077 | TASK [Print next openstack version]
2026-04-06 04:18:10.064898 | orchestrator | ok
2026-04-06 04:18:10.076280 |
2026-04-06 04:18:10.076407 | TASK [Print next manager version]
2026-04-06 04:18:10.152957 | orchestrator | ok
2026-04-06 04:18:10.162109 |
2026-04-06 04:18:10.162249 | TASK [Set cloud fact (Zuul deployment)]
2026-04-06 04:18:10.230897 | orchestrator | ok
2026-04-06 04:18:10.244844 |
2026-04-06 04:18:10.245002 | TASK [Set cloud fact (local deployment)]
2026-04-06 04:18:10.280431 | orchestrator | skipping: Conditional result was False
2026-04-06 04:18:10.295214 |
2026-04-06 04:18:10.295367 | TASK [Fetch manager address]
2026-04-06 04:18:10.600852 | orchestrator | ok
2026-04-06 04:18:10.613658 |
2026-04-06 04:18:10.613884 | TASK [Set manager_host address]
2026-04-06 04:18:10.689912 | orchestrator | ok
2026-04-06 04:18:10.701150 |
2026-04-06 04:18:10.701290 | TASK [Run upgrade]
2026-04-06 04:18:11.400029 | orchestrator | + set -e
2026-04-06 04:18:11.400191 | orchestrator | + export MANAGER_VERSION=10.0.0
2026-04-06 04:18:11.400211 | orchestrator | + MANAGER_VERSION=10.0.0
2026-04-06 04:18:11.400221 | orchestrator | + CEPH_VERSION=reef
2026-04-06 04:18:11.400230 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-04-06 04:18:11.400239 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-04-06 04:18:11.400248 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0 reef 2024.2 kolla/release'
2026-04-06 04:18:11.407955 | orchestrator | + set -e
2026-04-06 04:18:11.408073 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-06 04:18:11.408093 | orchestrator | ++ export INTERACTIVE=false
2026-04-06 04:18:11.408112 | orchestrator | ++ INTERACTIVE=false
2026-04-06 04:18:11.408122 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-06 04:18:11.408136 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-06 04:18:11.409383 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible
2026-04-06 04:18:11.447643 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0
2026-04-06 04:18:11.448875 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-04-06 04:18:11.489424 | orchestrator |
2026-04-06 04:18:11.489511 | orchestrator | # UPGRADE MANAGER
2026-04-06 04:18:11.489526 | orchestrator |
2026-04-06 04:18:11.489533 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2
2026-04-06 04:18:11.489542 | orchestrator | + echo
2026-04-06 04:18:11.489551 | orchestrator | + echo '# UPGRADE MANAGER'
2026-04-06 04:18:11.489558 | orchestrator | + echo
2026-04-06 04:18:11.489564 | orchestrator | + export MANAGER_VERSION=10.0.0
2026-04-06 04:18:11.489571 | orchestrator | + MANAGER_VERSION=10.0.0
2026-04-06 04:18:11.489578 | orchestrator | + CEPH_VERSION=reef
2026-04-06 04:18:11.489585 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-04-06 04:18:11.489592 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-04-06 04:18:11.489599 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0
2026-04-06 04:18:11.494944 | orchestrator | + set -e
2026-04-06 04:18:11.495012 | orchestrator | + VERSION=10.0.0
2026-04-06 04:18:11.495019 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0/g' /opt/configuration/environments/manager/configuration.yml
2026-04-06 04:18:11.500358 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]]
2026-04-06 04:18:11.500454 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-06 04:18:11.504769 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-04-06 04:18:11.509068 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-04-06 04:18:11.517335 | orchestrator | /opt/configuration ~
2026-04-06 04:18:11.517420 | orchestrator | + set -e
2026-04-06 04:18:11.517429 | orchestrator | + pushd /opt/configuration
2026-04-06 04:18:11.517436 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-06 04:18:11.517444 | orchestrator | + source /opt/venv/bin/activate
2026-04-06 04:18:11.518484 | orchestrator | ++ deactivate nondestructive
2026-04-06 04:18:11.518553 | orchestrator | ++ '[' -n '' ']'
2026-04-06 04:18:11.518564 | orchestrator | ++ '[' -n '' ']'
2026-04-06 04:18:11.518573 | orchestrator | ++ hash -r
2026-04-06 04:18:11.518582 | orchestrator | ++ '[' -n '' ']'
2026-04-06 04:18:11.518590 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-06 04:18:11.518599 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-06 04:18:11.518608 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-06 04:18:11.518642 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-06 04:18:11.518651 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-06 04:18:11.518711 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-06 04:18:11.518720 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-06 04:18:11.518738 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-06 04:18:11.518748 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-06 04:18:11.518756 | orchestrator | ++ export PATH
2026-04-06 04:18:11.518764 | orchestrator | ++ '[' -n '' ']'
2026-04-06 04:18:11.518772 | orchestrator | ++ '[' -z '' ']'
2026-04-06 04:18:11.518780 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-06 04:18:11.518788 | orchestrator | ++ PS1='(venv) '
2026-04-06 04:18:11.518796 | orchestrator | ++ export PS1
2026-04-06 04:18:11.518802 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-06 04:18:11.518807 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-06 04:18:11.518812 | orchestrator | ++ hash -r
2026-04-06 04:18:11.518849 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-04-06 04:18:12.743585 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-04-06 04:18:12.744993 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1)
2026-04-06 04:18:12.746626 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-04-06 04:18:12.748132 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-04-06 04:18:12.749490 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-04-06 04:18:12.760304 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2)
2026-04-06 04:18:12.762944 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-04-06 04:18:12.763002 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-04-06 04:18:12.764866 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-04-06 04:18:12.806213 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7)
2026-04-06 04:18:12.807737 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-04-06 04:18:12.809816 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-04-06 04:18:12.811326 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-04-06 04:18:12.815261 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-04-06 04:18:13.122407 | orchestrator | ++ which gilt
2026-04-06 04:18:13.124738 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-04-06 04:18:13.124807 | orchestrator | + /opt/venv/bin/gilt overlay
2026-04-06 04:18:13.384557 | orchestrator | osism.cfg-generics:
2026-04-06 04:18:13.484845 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-04-06 04:18:13.485524 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-04-06 04:18:13.487678 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-04-06 04:18:13.487754 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-04-06 04:18:14.615723 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-04-06 04:18:14.632351 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-04-06 04:18:15.081252 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-04-06 04:18:15.150282 | orchestrator | ~
2026-04-06 04:18:15.150416 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-06 04:18:15.150445 | orchestrator | + deactivate
2026-04-06 04:18:15.150464 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-06 04:18:15.150485 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-06 04:18:15.150504 | orchestrator | + export PATH
2026-04-06 04:18:15.150522 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-06 04:18:15.150540 | orchestrator | + '[' -n '' ']'
2026-04-06 04:18:15.150558 | orchestrator | + hash -r
2026-04-06 04:18:15.150577 | orchestrator | + '[' -n '' ']'
2026-04-06 04:18:15.150596 | orchestrator | + unset VIRTUAL_ENV
2026-04-06 04:18:15.150612 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-06 04:18:15.150629 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-06 04:18:15.150647 | orchestrator | + unset -f deactivate
2026-04-06 04:18:15.150703 | orchestrator | + popd
2026-04-06 04:18:15.151720 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-04-06 04:18:15.151846 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-04-06 04:18:15.159487 | orchestrator | + set -e
2026-04-06 04:18:15.159839 | orchestrator | + NAMESPACE=kolla/release
2026-04-06 04:18:15.159866 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-04-06 04:18:15.167144 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-04-06 04:18:15.174281 | orchestrator | /opt/configuration ~
2026-04-06 04:18:15.174355 | orchestrator | + set -e
2026-04-06 04:18:15.174364 | orchestrator | + pushd /opt/configuration
2026-04-06 04:18:15.174371 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-06 04:18:15.174377 | orchestrator | + source /opt/venv/bin/activate
2026-04-06 04:18:15.174392 | orchestrator | ++ deactivate nondestructive
2026-04-06 04:18:15.174399 | orchestrator | ++ '[' -n '' ']'
2026-04-06 04:18:15.174405 | orchestrator | ++ '[' -n '' ']'
2026-04-06 04:18:15.174411 | orchestrator | ++ hash -r
2026-04-06 04:18:15.174417 | orchestrator | ++ '[' -n '' ']'
2026-04-06 04:18:15.174423 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-06 04:18:15.174429 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-06 04:18:15.174435 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-06 04:18:15.174441 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-06 04:18:15.174447 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-06 04:18:15.174453 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-06 04:18:15.174535 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-06 04:18:15.174678 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-06 04:18:15.174690 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-06 04:18:15.174696 | orchestrator | ++ export PATH
2026-04-06 04:18:15.174702 | orchestrator | ++ '[' -n '' ']'
2026-04-06 04:18:15.174708 | orchestrator | ++ '[' -z '' ']'
2026-04-06 04:18:15.174714 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-06 04:18:15.174719 | orchestrator | ++ PS1='(venv) '
2026-04-06 04:18:15.174725 | orchestrator | ++ export PS1
2026-04-06 04:18:15.174731 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-06 04:18:15.174737 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-06 04:18:15.174743 | orchestrator | ++ hash -r
2026-04-06 04:18:15.174749 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-04-06 04:18:15.796815 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-04-06 04:18:15.801513 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1)
2026-04-06 04:18:15.801557 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-04-06 04:18:15.801597 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-04-06 04:18:15.801634 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-04-06 04:18:15.808253 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2)
2026-04-06 04:18:15.809913 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-04-06 04:18:15.811540 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-04-06 04:18:15.812801 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-04-06 04:18:15.853158 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7)
2026-04-06 04:18:15.854271 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-04-06 04:18:15.856280 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-04-06 04:18:15.857691 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-04-06 04:18:15.861532 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-04-06 04:18:16.093859 | orchestrator | ++ which gilt
2026-04-06 04:18:16.096005 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-04-06 04:18:16.096063 | orchestrator | + /opt/venv/bin/gilt overlay
2026-04-06 04:18:16.302148 | orchestrator | osism.cfg-generics:
2026-04-06 04:18:16.394885 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-04-06 04:18:16.395112 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-04-06 04:18:16.395561 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-04-06 04:18:16.395815 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-04-06 04:18:17.071577 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-04-06 04:18:17.082403 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-04-06 04:18:17.507806 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-04-06 04:18:17.572233 | orchestrator | ~
2026-04-06 04:18:17.572355 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-06 04:18:17.572368 | orchestrator | + deactivate
2026-04-06 04:18:17.572377 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-06 04:18:17.572388 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-06 04:18:17.572396 | orchestrator | + export PATH
2026-04-06 04:18:17.572405 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-06 04:18:17.572417 | orchestrator | + '[' -n '' ']'
2026-04-06 04:18:17.572429 | orchestrator | + hash -r
2026-04-06 04:18:17.572441 | orchestrator | + '[' -n '' ']'
2026-04-06 04:18:17.572452 | orchestrator | + unset VIRTUAL_ENV
2026-04-06 04:18:17.572464 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-06 04:18:17.572477 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-06 04:18:17.572489 | orchestrator | + unset -f deactivate
2026-04-06 04:18:17.572501 | orchestrator | + popd
2026-04-06 04:18:17.574249 | orchestrator | ++ semver v0.20251130.0 6.0.0
2026-04-06 04:18:17.636126 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-06 04:18:17.636935 | orchestrator | ++ semver 10.0.0 10.0.0-0
2026-04-06 04:18:17.715613 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-06 04:18:17.715695 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-06 04:18:17.719264 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-06 04:18:17.725324 | orchestrator | +++ semver v0.20251130.0 9.5.0
2026-04-06 04:18:17.789867 | orchestrator | ++ '[' -1 -le 0 ']'
2026-04-06 04:18:17.790638 | orchestrator | +++ semver 10.0.0 10.0.0-0
2026-04-06 04:18:17.873762 | orchestrator | ++ '[' 1 -ge 0 ']'
2026-04-06 04:18:17.873908 | orchestrator | ++ echo true
2026-04-06 04:18:17.873922 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true
2026-04-06 04:18:17.875639 | orchestrator | +++ semver 2024.2 2024.2
2026-04-06 04:18:17.959824 | orchestrator | ++ '[' 0 -le 0 ']'
2026-04-06 04:18:17.960151 | orchestrator | +++ semver 2024.2 2025.1
2026-04-06 04:18:18.027146 | orchestrator | ++ '[' -1 -ge 0 ']'
2026-04-06 04:18:18.027246 | orchestrator | ++ echo false
2026-04-06 04:18:18.028056 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false
2026-04-06 04:18:18.028120 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-06 04:18:18.028131 | orchestrator | + echo 'om_rpc_vhost: openstack'
2026-04-06 04:18:18.028138 | orchestrator | + echo 'om_notify_vhost: openstack'
2026-04-06 04:18:18.028145 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml
2026-04-06 04:18:18.032437 | orchestrator | + echo 'export RABBITMQ3TO4=true'
2026-04-06 04:18:18.032576 | orchestrator | + sudo tee -a /opt/manager-vars.sh
2026-04-06 04:18:18.048396 | orchestrator | export RABBITMQ3TO4=true
2026-04-06 04:18:18.051922 | orchestrator | + osism update manager
2026-04-06 04:18:24.252622 | orchestrator | Collecting uv
2026-04-06 04:18:24.349569 | orchestrator | Downloading uv-0.11.3-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB)
2026-04-06 04:18:24.367313 | orchestrator | Downloading uv-0.11.3-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (24.6 MB)
2026-04-06 04:18:25.646155 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 24.6/24.6 MB 19.8 MB/s eta 0:00:00
2026-04-06 04:18:25.715999 | orchestrator | Installing collected packages: uv
2026-04-06 04:18:26.197175 | orchestrator | Successfully installed uv-0.11.3
2026-04-06 04:18:27.004528 | orchestrator | Resolved 11 packages in 407ms
2026-04-06 04:18:27.037508 | orchestrator | Downloading cryptography (4.3MiB)
2026-04-06 04:18:27.037633 | orchestrator | Downloading netaddr (2.2MiB)
2026-04-06 04:18:27.037649 | orchestrator | Downloading ansible (54.5MiB)
2026-04-06 04:18:27.037848 | orchestrator | Downloading ansible-core (2.1MiB)
2026-04-06 04:18:27.395049 | orchestrator | Downloaded netaddr
2026-04-06 04:18:27.506211 | orchestrator | Downloaded cryptography
2026-04-06 04:18:27.570485 | orchestrator | Downloaded ansible-core
2026-04-06 04:18:35.054439 | orchestrator | Downloaded ansible
2026-04-06 04:18:35.054719 | orchestrator | Prepared 11 packages in 8.05s
2026-04-06 04:18:35.639627 | orchestrator | Installed 11 packages in 584ms
2026-04-06 04:18:35.639723 | orchestrator | + ansible==11.11.0
2026-04-06 04:18:35.639733 | orchestrator | + ansible-core==2.18.15
2026-04-06 04:18:35.639741 | orchestrator | + cffi==2.0.0
2026-04-06 04:18:35.639748 | orchestrator | + cryptography==46.0.6
2026-04-06 04:18:35.639755 | orchestrator | + jinja2==3.1.6
2026-04-06 04:18:35.639762 | orchestrator | + markupsafe==3.0.3
2026-04-06 04:18:35.639768 | orchestrator | + netaddr==1.3.0
2026-04-06 04:18:35.639775 | orchestrator | + packaging==26.0
2026-04-06 04:18:35.639781 | orchestrator | + pycparser==3.0
2026-04-06 04:18:35.639790 | orchestrator | + pyyaml==6.0.3
2026-04-06 04:18:35.639804 | orchestrator | + resolvelib==1.0.1
2026-04-06 04:18:36.862904 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-204251i86gqfpp/tmpph_2ipxo/ansible-collection-services_97491ru'...
2026-04-06 04:18:38.384601 | orchestrator | Your branch is up to date with 'origin/main'.
2026-04-06 04:18:38.384752 | orchestrator | Already on 'main'
2026-04-06 04:18:38.898280 | orchestrator | Starting galaxy collection install process
2026-04-06 04:18:38.898364 | orchestrator | Process install dependency map
2026-04-06 04:18:38.898377 | orchestrator | Starting collection install process
2026-04-06 04:18:38.898389 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services'
2026-04-06 04:18:38.898400 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services
2026-04-06 04:18:38.898410 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-06 04:18:39.467160 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-204296yft0naau/tmpouw7visj/ansible-playbooks-managery_mwl9lk'...
2026-04-06 04:18:41.917953 | orchestrator | Your branch is up to date with 'origin/main'.
2026-04-06 04:18:41.918093 | orchestrator | Already on 'main'
2026-04-06 04:18:42.223076 | orchestrator | Starting galaxy collection install process
2026-04-06 04:18:42.223148 | orchestrator | Process install dependency map
2026-04-06 04:18:42.223157 | orchestrator | Starting collection install process
2026-04-06 04:18:42.223164 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager'
2026-04-06 04:18:42.223171 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager
2026-04-06 04:18:42.223184 | orchestrator | osism.manager:999.0.0 was installed successfully
2026-04-06 04:18:42.961451 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use
2026-04-06 04:18:42.961549 | orchestrator | -vvvv to see details
2026-04-06 04:18:43.447974 | orchestrator |
2026-04-06 04:18:43.448072 | orchestrator | PLAY [Apply role manager] ******************************************************
2026-04-06 04:18:43.448087 | orchestrator |
2026-04-06 04:18:43.448120 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-06 04:18:47.756754 | orchestrator | ok: [testbed-manager]
2026-04-06 04:18:47.756833 | orchestrator |
2026-04-06 04:18:47.756841 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-04-06 04:18:47.835503 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-04-06 04:18:47.835630 | orchestrator |
2026-04-06 04:18:47.835649 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-04-06 04:18:49.807465 | orchestrator | ok: [testbed-manager]
2026-04-06 04:18:49.807557 | orchestrator |
2026-04-06 04:18:49.807568 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-04-06 04:18:49.870287 | orchestrator | ok: [testbed-manager]
2026-04-06 04:18:49.870379 | orchestrator |
2026-04-06 04:18:49.870394 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-04-06 04:18:49.955241 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-04-06 04:18:49.955352 | orchestrator |
2026-04-06 04:18:49.955369 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-04-06 04:18:54.567617 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible)
2026-04-06 04:18:54.567785 | orchestrator | ok: [testbed-manager] => (item=/opt/archive)
2026-04-06 04:18:54.567803 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration)
2026-04-06 04:18:54.567829 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data)
2026-04-06 04:18:54.567841 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-04-06 04:18:54.567852 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets)
2026-04-06 04:18:54.567864 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets)
2026-04-06 04:18:54.567875 | orchestrator | ok: [testbed-manager] => (item=/opt/state)
2026-04-06 04:18:54.567886 | orchestrator |
2026-04-06 04:18:54.567899 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-04-06 04:18:55.705520 | orchestrator | ok: [testbed-manager]
2026-04-06 04:18:55.705640 | orchestrator |
2026-04-06 04:18:55.705735 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-04-06 04:18:56.782951 | orchestrator | ok: [testbed-manager]
2026-04-06 04:18:56.783083 | orchestrator |
2026-04-06 04:18:56.783111 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-04-06 04:18:56.901759 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-04-06 04:18:56.901856 | orchestrator |
2026-04-06 04:18:56.901871 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-04-06 04:18:58.776509 | orchestrator | ok: [testbed-manager] => (item=ara)
2026-04-06 04:18:58.776610 | orchestrator | ok: [testbed-manager] => (item=ara-server)
2026-04-06 04:18:58.776625 | orchestrator |
2026-04-06 04:18:58.776645 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-04-06 04:18:59.825429 | orchestrator | ok: [testbed-manager]
2026-04-06 04:18:59.825501 | orchestrator |
2026-04-06 04:18:59.825508 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-04-06 04:18:59.894261 | orchestrator | skipping: [testbed-manager]
2026-04-06 04:18:59.894337 | orchestrator |
2026-04-06 04:18:59.894346 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-04-06 04:18:59.980347 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-04-06 04:18:59.980453 | orchestrator |
2026-04-06 04:18:59.980469 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-04-06 04:19:00.980471 | orchestrator | ok: [testbed-manager]
2026-04-06 04:19:00.980547 | orchestrator |
2026-04-06 04:19:00.980558 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-04-06 04:19:01.051451 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-04-06 04:19:01.051562 | orchestrator |
2026-04-06 04:19:01.051577 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-04-06 04:19:03.218591 | orchestrator | ok: [testbed-manager] => (item=None)
2026-04-06 04:19:03.218707 | orchestrator | ok: [testbed-manager] => (item=None)
2026-04-06 04:19:03.218719 | orchestrator | ok: [testbed-manager]
2026-04-06 04:19:03.218727 | orchestrator |
2026-04-06 04:19:03.218734 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-04-06 04:19:04.246177 | orchestrator | ok: [testbed-manager]
2026-04-06 04:19:04.246288 | orchestrator |
2026-04-06 04:19:04.246308 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-04-06 04:19:04.324499 | orchestrator | skipping: [testbed-manager]
2026-04-06 04:19:04.324603 | orchestrator |
2026-04-06 04:19:04.324619 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-04-06 04:19:04.431933 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-04-06 04:19:04.432013 | orchestrator |
2026-04-06 04:19:04.432023 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-04-06 04:19:05.227775 | orchestrator | ok: [testbed-manager]
2026-04-06 04:19:05.227883 | orchestrator |
2026-04-06 04:19:05.227899 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-04-06 04:19:05.818225 | orchestrator | ok: [testbed-manager]
2026-04-06 04:19:05.818311 | orchestrator |
2026-04-06 04:19:05.818338 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-04-06 04:19:07.952055 | orchestrator | ok: [testbed-manager] => (item=conductor)
2026-04-06 04:19:07.952149 | orchestrator | ok: [testbed-manager] => (item=openstack)
2026-04-06 04:19:07.952161 | orchestrator |
2026-04-06 04:19:07.952171 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-04-06 04:19:09.340334 | orchestrator | changed: [testbed-manager]
2026-04-06 04:19:09.340409 | orchestrator |
2026-04-06 04:19:09.340416 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-04-06 04:19:09.899303 | orchestrator | ok: [testbed-manager]
2026-04-06 04:19:09.899393 | orchestrator |
2026-04-06 04:19:09.899404 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-04-06 04:19:10.482707 | orchestrator | ok: [testbed-manager]
2026-04-06 04:19:10.482807 | orchestrator |
2026-04-06 04:19:10.482820 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-04-06 04:19:10.536127 | orchestrator | skipping: [testbed-manager]
2026-04-06 04:19:10.536198 | orchestrator |
2026-04-06 04:19:10.536207 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-04-06 04:19:10.619047 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-04-06 04:19:10.619133 | orchestrator |
2026-04-06 04:19:10.619145 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-04-06 04:19:10.673185 | orchestrator | ok: [testbed-manager]
2026-04-06 04:19:10.673294 | orchestrator |
2026-04-06 04:19:10.673313 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-04-06 04:19:13.646286 | orchestrator | ok: [testbed-manager] => (item=osism)
2026-04-06 04:19:13.646414 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker)
2026-04-06 04:19:13.646436 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager)
2026-04-06 04:19:13.646454 | orchestrator | 2026-04-06 04:19:13.646473 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-04-06 04:19:14.711184 | orchestrator | ok: [testbed-manager] 2026-04-06 04:19:14.711325 | orchestrator | 2026-04-06 04:19:14.711346 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-04-06 04:19:15.873011 | orchestrator | ok: [testbed-manager] 2026-04-06 04:19:15.873095 | orchestrator | 2026-04-06 04:19:15.873103 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-04-06 04:19:16.904644 | orchestrator | ok: [testbed-manager] 2026-04-06 04:19:16.904776 | orchestrator | 2026-04-06 04:19:16.904786 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-04-06 04:19:16.980530 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-04-06 04:19:16.980660 | orchestrator | 2026-04-06 04:19:16.980724 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-04-06 04:19:17.050090 | orchestrator | ok: [testbed-manager] 2026-04-06 04:19:17.050165 | orchestrator | 2026-04-06 04:19:17.050172 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-04-06 04:19:18.060485 | orchestrator | ok: [testbed-manager] => (item=osism-include) 2026-04-06 04:19:18.060606 | orchestrator | 2026-04-06 04:19:18.060621 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-04-06 04:19:18.161308 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-04-06 04:19:18.161433 | orchestrator | 2026-04-06 04:19:18.161460 | orchestrator | TASK 
[osism.services.manager : Copy manager systemd unit file] ***************** 2026-04-06 04:19:19.254602 | orchestrator | ok: [testbed-manager] 2026-04-06 04:19:19.254766 | orchestrator | 2026-04-06 04:19:19.254779 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-04-06 04:19:20.428112 | orchestrator | ok: [testbed-manager] 2026-04-06 04:19:20.428195 | orchestrator | 2026-04-06 04:19:20.428205 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-04-06 04:19:20.494596 | orchestrator | skipping: [testbed-manager] 2026-04-06 04:19:20.494727 | orchestrator | 2026-04-06 04:19:20.494742 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-04-06 04:19:20.561117 | orchestrator | ok: [testbed-manager] 2026-04-06 04:19:20.561208 | orchestrator | 2026-04-06 04:19:20.561220 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-04-06 04:19:21.978588 | orchestrator | changed: [testbed-manager] 2026-04-06 04:19:21.978768 | orchestrator | 2026-04-06 04:19:21.978784 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-04-06 04:20:35.646497 | orchestrator | changed: [testbed-manager] 2026-04-06 04:20:35.646644 | orchestrator | 2026-04-06 04:20:35.646665 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-04-06 04:20:36.977241 | orchestrator | ok: [testbed-manager] 2026-04-06 04:20:36.977336 | orchestrator | 2026-04-06 04:20:36.977350 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-04-06 04:20:37.044191 | orchestrator | skipping: [testbed-manager] 2026-04-06 04:20:37.044259 | orchestrator | 2026-04-06 04:20:37.044266 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-04-06 
04:20:37.932459 | orchestrator | ok: [testbed-manager] 2026-04-06 04:20:37.932594 | orchestrator | 2026-04-06 04:20:37.932614 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-04-06 04:20:38.016152 | orchestrator | skipping: [testbed-manager] 2026-04-06 04:20:38.016273 | orchestrator | 2026-04-06 04:20:38.016291 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-06 04:20:38.016304 | orchestrator | 2026-04-06 04:20:38.016315 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-04-06 04:20:52.863071 | orchestrator | changed: [testbed-manager] 2026-04-06 04:20:52.863213 | orchestrator | 2026-04-06 04:20:52.863242 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-04-06 04:21:52.913041 | orchestrator | Pausing for 60 seconds 2026-04-06 04:21:52.913153 | orchestrator | changed: [testbed-manager] 2026-04-06 04:21:52.913168 | orchestrator | 2026-04-06 04:21:52.913176 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] *** 2026-04-06 04:21:52.969527 | orchestrator | ok: [testbed-manager] 2026-04-06 04:21:52.969655 | orchestrator | 2026-04-06 04:21:52.969752 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-04-06 04:21:57.070598 | orchestrator | changed: [testbed-manager] 2026-04-06 04:21:57.070730 | orchestrator | 2026-04-06 04:21:57.070742 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-04-06 04:22:59.960663 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-04-06 04:22:59.960806 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-04-06 04:22:59.960823 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-04-06 04:22:59.960837 | orchestrator | changed: [testbed-manager] 2026-04-06 04:22:59.960850 | orchestrator | 2026-04-06 04:22:59.960863 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-04-06 04:23:06.996484 | orchestrator | changed: [testbed-manager] 2026-04-06 04:23:06.996618 | orchestrator | 2026-04-06 04:23:06.996637 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-04-06 04:23:07.088513 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-04-06 04:23:07.088629 | orchestrator | 2026-04-06 04:23:07.088642 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-06 04:23:07.088651 | orchestrator | 2026-04-06 04:23:07.088658 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-04-06 04:23:07.150763 | orchestrator | skipping: [testbed-manager] 2026-04-06 04:23:07.150834 | orchestrator | 2026-04-06 04:23:07.150843 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-04-06 04:23:07.250341 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-04-06 04:23:07.250441 | orchestrator | 2026-04-06 04:23:07.250457 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-04-06 04:23:08.386243 | orchestrator | changed: [testbed-manager] 2026-04-06 04:23:08.386346 | orchestrator | 2026-04-06 04:23:08.386363 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-04-06 04:23:11.818735 
| orchestrator | ok: [testbed-manager] 2026-04-06 04:23:11.818847 | orchestrator | 2026-04-06 04:23:11.818856 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-04-06 04:23:11.914456 | orchestrator | ok: [testbed-manager] => { 2026-04-06 04:23:11.914559 | orchestrator | "version_check_result.stdout_lines": [ 2026-04-06 04:23:11.914574 | orchestrator | "=== OSISM Container Version Check ===", 2026-04-06 04:23:11.914587 | orchestrator | "Checking running containers against expected versions...", 2026-04-06 04:23:11.914600 | orchestrator | "", 2026-04-06 04:23:11.914613 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-04-06 04:23:11.914626 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20260322.0", 2026-04-06 04:23:11.914638 | orchestrator | " Enabled: true", 2026-04-06 04:23:11.914650 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20260322.0", 2026-04-06 04:23:11.914662 | orchestrator | " Status: ✅ MATCH", 2026-04-06 04:23:11.914673 | orchestrator | "", 2026-04-06 04:23:11.914685 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-04-06 04:23:11.914697 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20260322.0", 2026-04-06 04:23:11.914709 | orchestrator | " Enabled: true", 2026-04-06 04:23:11.914721 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20260322.0", 2026-04-06 04:23:11.914733 | orchestrator | " Status: ✅ MATCH", 2026-04-06 04:23:11.914795 | orchestrator | "", 2026-04-06 04:23:11.914811 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-04-06 04:23:11.914823 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20260322.0", 2026-04-06 04:23:11.914836 | orchestrator | " Enabled: true", 2026-04-06 04:23:11.914848 | orchestrator | " Running: 
registry.osism.tech/osism/osism-kubernetes:0.20260322.0", 2026-04-06 04:23:11.914861 | orchestrator | " Status: ✅ MATCH", 2026-04-06 04:23:11.914874 | orchestrator | "", 2026-04-06 04:23:11.914888 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-04-06 04:23:11.914901 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20260322.0", 2026-04-06 04:23:11.914925 | orchestrator | " Enabled: true", 2026-04-06 04:23:11.914937 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20260322.0", 2026-04-06 04:23:11.914950 | orchestrator | " Status: ✅ MATCH", 2026-04-06 04:23:11.914963 | orchestrator | "", 2026-04-06 04:23:11.914977 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-04-06 04:23:11.914991 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20260328.0", 2026-04-06 04:23:11.915004 | orchestrator | " Enabled: true", 2026-04-06 04:23:11.915016 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20260328.0", 2026-04-06 04:23:11.915025 | orchestrator | " Status: ✅ MATCH", 2026-04-06 04:23:11.915034 | orchestrator | "", 2026-04-06 04:23:11.915042 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-04-06 04:23:11.915051 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-06 04:23:11.915102 | orchestrator | " Enabled: true", 2026-04-06 04:23:11.915116 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-06 04:23:11.915129 | orchestrator | " Status: ✅ MATCH", 2026-04-06 04:23:11.915140 | orchestrator | "", 2026-04-06 04:23:11.915151 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-04-06 04:23:11.915159 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-06 04:23:11.915167 | orchestrator | " Enabled: true", 2026-04-06 04:23:11.915174 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-06 
04:23:11.915182 | orchestrator | " Status: ✅ MATCH", 2026-04-06 04:23:11.915189 | orchestrator | "", 2026-04-06 04:23:11.915196 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-04-06 04:23:11.915204 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-06 04:23:11.915211 | orchestrator | " Enabled: true", 2026-04-06 04:23:11.915219 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-06 04:23:11.915226 | orchestrator | " Status: ✅ MATCH", 2026-04-06 04:23:11.915234 | orchestrator | "", 2026-04-06 04:23:11.915241 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-04-06 04:23:11.915248 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20260320.0", 2026-04-06 04:23:11.915256 | orchestrator | " Enabled: true", 2026-04-06 04:23:11.915263 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20260320.0", 2026-04-06 04:23:11.915270 | orchestrator | " Status: ✅ MATCH", 2026-04-06 04:23:11.915277 | orchestrator | "", 2026-04-06 04:23:11.915289 | orchestrator | "Checking service: redis (Redis Cache)", 2026-04-06 04:23:11.915296 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-06 04:23:11.915305 | orchestrator | " Enabled: true", 2026-04-06 04:23:11.915312 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-06 04:23:11.915319 | orchestrator | " Status: ✅ MATCH", 2026-04-06 04:23:11.915327 | orchestrator | "", 2026-04-06 04:23:11.915334 | orchestrator | "Checking service: api (OSISM API Service)", 2026-04-06 04:23:11.915341 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-06 04:23:11.915349 | orchestrator | " Enabled: true", 2026-04-06 04:23:11.915356 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-06 04:23:11.915363 | orchestrator | " Status: ✅ MATCH", 2026-04-06 
04:23:11.915371 | orchestrator | "", 2026-04-06 04:23:11.915378 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-04-06 04:23:11.915386 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-06 04:23:11.915393 | orchestrator | " Enabled: true", 2026-04-06 04:23:11.915400 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-06 04:23:11.915408 | orchestrator | " Status: ✅ MATCH", 2026-04-06 04:23:11.915415 | orchestrator | "", 2026-04-06 04:23:11.915422 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-04-06 04:23:11.915430 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-06 04:23:11.915440 | orchestrator | " Enabled: true", 2026-04-06 04:23:11.915452 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-06 04:23:11.915464 | orchestrator | " Status: ✅ MATCH", 2026-04-06 04:23:11.915475 | orchestrator | "", 2026-04-06 04:23:11.915487 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-04-06 04:23:11.915499 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-06 04:23:11.915510 | orchestrator | " Enabled: true", 2026-04-06 04:23:11.915522 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-06 04:23:11.915558 | orchestrator | " Status: ✅ MATCH", 2026-04-06 04:23:11.915571 | orchestrator | "", 2026-04-06 04:23:11.915582 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-04-06 04:23:11.915595 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-06 04:23:11.915607 | orchestrator | " Enabled: true", 2026-04-06 04:23:11.915629 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0", 2026-04-06 04:23:11.915641 | orchestrator | " Status: ✅ MATCH", 2026-04-06 04:23:11.915653 | orchestrator | "", 2026-04-06 04:23:11.915665 | orchestrator | "=== Summary 
===", 2026-04-06 04:23:11.915677 | orchestrator | "Errors (version mismatches): 0", 2026-04-06 04:23:11.915689 | orchestrator | "Warnings (expected containers not running): 0", 2026-04-06 04:23:11.915699 | orchestrator | "", 2026-04-06 04:23:11.915707 | orchestrator | "✅ All running containers match expected versions!" 2026-04-06 04:23:11.915714 | orchestrator | ] 2026-04-06 04:23:11.915722 | orchestrator | } 2026-04-06 04:23:11.915729 | orchestrator | 2026-04-06 04:23:11.915736 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-06 04:23:11.986645 | orchestrator | skipping: [testbed-manager] 2026-04-06 04:23:11.986792 | orchestrator | 2026-04-06 04:23:11.986810 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 04:23:11.986824 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 2026-04-06 04:23:11.986835 | orchestrator | 2026-04-06 04:23:25.315712 | orchestrator | 2026-04-06 04:23:25 | INFO  | Task 71c5bd83-75cd-4d2f-969b-120960ab45b2 (sync inventory) is running in background. Output coming soon. 
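The version check above boils down to comparing each container's running image against an expected reference and counting mismatches. A minimal sketch of that idea (assumptions: the function name `check_versions` and the `name expected` input format are illustrative, not the actual script deployed by the role; the real script also tracks the Enabled flag and warnings):

```shell
# Sketch: compare running container images against expected references.
# Reads "name expected-image" pairs from stdin; prints MATCH/MISMATCH per
# container and returns the number of mismatches.
check_versions() {
    local errors=0
    local name expected running
    while read -r name expected; do
        # What the container is actually running, per the Docker engine.
        running="$(docker inspect -f '{{.Config.Image}}' "$name" 2>/dev/null)"
        if [[ "$running" == "$expected" ]]; then
            echo "$name: MATCH"
        else
            echo "$name: MISMATCH (expected $expected, running $running)"
            errors=$((errors + 1))
        fi
    done
    return "$errors"
}
```

Usage: `printf 'kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20260328.0\n' | check_versions`.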
2026-04-06 04:24:01.259771 | orchestrator | 2026-04-06 04:23:26 | INFO  | Starting group_vars file reorganization
2026-04-06 04:24:01.259981 | orchestrator | 2026-04-06 04:23:26 | INFO  | Moved 0 file(s) to their respective directories
2026-04-06 04:24:01.260006 | orchestrator | 2026-04-06 04:23:26 | INFO  | Group_vars file reorganization completed
2026-04-06 04:24:01.260021 | orchestrator | 2026-04-06 04:23:30 | INFO  | Starting variable preparation from inventory
2026-04-06 04:24:01.260035 | orchestrator | 2026-04-06 04:23:33 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-06 04:24:01.260049 | orchestrator | 2026-04-06 04:23:33 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-06 04:24:01.260063 | orchestrator | 2026-04-06 04:23:33 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-06 04:24:01.260078 | orchestrator | 2026-04-06 04:23:33 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-06 04:24:01.260093 | orchestrator | 2026-04-06 04:23:33 | INFO  | Variable preparation completed
2026-04-06 04:24:01.260107 | orchestrator | 2026-04-06 04:23:35 | INFO  | Starting inventory overwrite handling
2026-04-06 04:24:01.260121 | orchestrator | 2026-04-06 04:23:35 | INFO  | Handling group overwrites in 99-overwrite
2026-04-06 04:24:01.260136 | orchestrator | 2026-04-06 04:23:35 | INFO  | Removing group frr:children from 60-generic
2026-04-06 04:24:01.260145 | orchestrator | 2026-04-06 04:23:35 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-06 04:24:01.260154 | orchestrator | 2026-04-06 04:23:35 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-06 04:24:01.260162 | orchestrator | 2026-04-06 04:23:35 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-06 04:24:01.260171 | orchestrator | 2026-04-06 04:23:35 | INFO  | Handling group overwrites in 20-roles
2026-04-06 04:24:01.260179 | orchestrator | 2026-04-06 04:23:35 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-06 04:24:01.260187 | orchestrator | 2026-04-06 04:23:35 | INFO  | Removed 5 group(s) in total
2026-04-06 04:24:01.260195 | orchestrator | 2026-04-06 04:23:35 | INFO  | Inventory overwrite handling completed
2026-04-06 04:24:01.260203 | orchestrator | 2026-04-06 04:23:36 | INFO  | Starting merge of inventory files
2026-04-06 04:24:01.260211 | orchestrator | 2026-04-06 04:23:36 | INFO  | Inventory files merged successfully
2026-04-06 04:24:01.260219 | orchestrator | 2026-04-06 04:23:42 | INFO  | Generating minified hosts file
2026-04-06 04:24:01.260251 | orchestrator | 2026-04-06 04:23:44 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-04-06 04:24:01.260271 | orchestrator | 2026-04-06 04:23:44 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-04-06 04:24:01.260280 | orchestrator | 2026-04-06 04:23:46 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-06 04:24:01.260288 | orchestrator | 2026-04-06 04:23:59 | INFO  | Successfully wrote ClusterShell configuration
2026-04-06 04:24:01.509860 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-06 04:24:01.509972 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-04-06 04:24:01.509991 | orchestrator | + local max_attempts=60
2026-04-06 04:24:01.510007 | orchestrator | + local name=kolla-ansible
2026-04-06 04:24:01.510084 | orchestrator | + local attempt_num=1
2026-04-06 04:24:01.510771 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-04-06 04:24:01.547172 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-06 04:24:01.547259 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-04-06 04:24:01.547273 | orchestrator | + local max_attempts=60
2026-04-06 04:24:01.547285 | orchestrator | + local name=osism-ansible
2026-04-06 04:24:01.547296 | orchestrator | + local attempt_num=1
2026-04-06 04:24:01.547522 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-04-06 04:24:01.577220 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-06 04:24:01.577315 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-04-06 04:24:01.784948 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-04-06 04:24:01.785019 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20260322.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy)
2026-04-06 04:24:01.785026 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20260328.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy)
2026-04-06 04:24:01.785031 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp
2026-04-06 04:24:01.785090 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp
2026-04-06 04:24:01.785097 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy)
2026-04-06 04:24:01.785103 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy)
2026-04-06 04:24:01.785109 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20260322.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy)
2026-04-06 04:24:01.785116 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 36 seconds ago
2026-04-06 04:24:01.785122 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp
2026-04-06 04:24:01.785128 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy)
2026-04-06 04:24:01.785156 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 minutes (healthy) 6379/tcp
2026-04-06 04:24:01.785163 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20260322.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy)
2026-04-06 04:24:01.785169 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20260320.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp
2026-04-06 04:24:01.785175 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20260322.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy)
2026-04-06 04:24:01.785181 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy)
2026-04-06 04:24:01.792256 | orchestrator | + [[ '' == \t\r\u\e ]]
2026-04-06 04:24:01.792321 | orchestrator | + [[ '' == \f\a\l\s\e ]]
2026-04-06 04:24:01.792326 | orchestrator | + osism apply facts
2026-04-06 04:24:13.347558 | orchestrator | 2026-04-06 04:24:13 | INFO  | Prepare task for execution of facts.
2026-04-06 04:24:13.435502 | orchestrator | 2026-04-06 04:24:13 | INFO  | Task 4420cd9e-cc4c-4d01-8815-8c13a9b2ee16 (facts) was prepared for execution.
2026-04-06 04:24:13.435597 | orchestrator | 2026-04-06 04:24:13 | INFO  | It takes a moment until task 4420cd9e-cc4c-4d01-8815-8c13a9b2ee16 (facts) has been started and output is visible here.
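The `wait_for_container_healthy 60 kolla-ansible` calls traced above poll `docker inspect` until the container reports `healthy`. A minimal sketch consistent with that trace (assumptions: the 5-second poll interval and the failure message are illustrative; only the variable names `max_attempts`, `name`, and `attempt_num` come from the trace):

```shell
# Sketch: block until a container's Docker healthcheck reports "healthy",
# giving up after max_attempts polls.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the healthcheck status exposed by the Docker engine.
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if [[ "$attempt_num" -ge "$max_attempts" ]]; then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5  # assumed poll interval
    done
}
```

Since the containers were already healthy in this run, each call returned on the first poll without sleeping.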
2026-04-06 04:24:40.880328 | orchestrator |
2026-04-06 04:24:40.880440 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-06 04:24:40.880457 | orchestrator |
2026-04-06 04:24:40.880467 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-06 04:24:40.880475 | orchestrator | Monday 06 April 2026 04:24:20 +0000 (0:00:02.772) 0:00:02.772 **********
2026-04-06 04:24:40.880482 | orchestrator | ok: [testbed-manager]
2026-04-06 04:24:40.880491 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:24:40.880498 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:24:40.880504 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:24:40.880511 | orchestrator | ok: [testbed-node-3]
2026-04-06 04:24:40.880517 | orchestrator | ok: [testbed-node-4]
2026-04-06 04:24:40.880523 | orchestrator | ok: [testbed-node-5]
2026-04-06 04:24:40.880529 | orchestrator |
2026-04-06 04:24:40.880540 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-06 04:24:40.880554 | orchestrator | Monday 06 April 2026 04:24:24 +0000 (0:00:03.889) 0:00:06.662 **********
2026-04-06 04:24:40.880567 | orchestrator | skipping: [testbed-manager]
2026-04-06 04:24:40.880581 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:24:40.880596 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:24:40.880609 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:24:40.880621 | orchestrator | skipping: [testbed-node-3]
2026-04-06 04:24:40.880635 | orchestrator | skipping: [testbed-node-4]
2026-04-06 04:24:40.880649 | orchestrator | skipping: [testbed-node-5]
2026-04-06 04:24:40.880661 | orchestrator |
2026-04-06 04:24:40.880675 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-06 04:24:40.880688 | orchestrator |
2026-04-06 04:24:40.880704 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-06 04:24:40.880718 | orchestrator | Monday 06 April 2026 04:24:28 +0000 (0:00:03.964) 0:00:10.627 **********
2026-04-06 04:24:40.880727 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:24:40.880735 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:24:40.880741 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:24:40.880747 | orchestrator | ok: [testbed-manager]
2026-04-06 04:24:40.880753 | orchestrator | ok: [testbed-node-3]
2026-04-06 04:24:40.880758 | orchestrator | ok: [testbed-node-4]
2026-04-06 04:24:40.880793 | orchestrator | ok: [testbed-node-5]
2026-04-06 04:24:40.880799 | orchestrator |
2026-04-06 04:24:40.880806 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-06 04:24:40.880861 | orchestrator |
2026-04-06 04:24:40.880872 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-06 04:24:40.880879 | orchestrator | Monday 06 April 2026 04:24:36 +0000 (0:00:08.593) 0:00:19.220 **********
2026-04-06 04:24:40.880886 | orchestrator | skipping: [testbed-manager]
2026-04-06 04:24:40.880894 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:24:40.880900 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:24:40.880907 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:24:40.880914 | orchestrator | skipping: [testbed-node-3]
2026-04-06 04:24:40.880921 | orchestrator | skipping: [testbed-node-4]
2026-04-06 04:24:40.880928 | orchestrator | skipping: [testbed-node-5]
2026-04-06 04:24:40.880935 | orchestrator |
2026-04-06 04:24:40.880942 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 04:24:40.880950 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 04:24:40.880959 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 04:24:40.880967 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 04:24:40.880974 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 04:24:40.880982 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 04:24:40.880989 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 04:24:40.880997 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 04:24:40.881004 | orchestrator |
2026-04-06 04:24:40.881011 | orchestrator |
2026-04-06 04:24:40.881019 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 04:24:40.881027 | orchestrator | Monday 06 April 2026 04:24:40 +0000 (0:00:03.809) 0:00:23.030 **********
2026-04-06 04:24:40.881034 | orchestrator | ===============================================================================
2026-04-06 04:24:40.881042 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.59s
2026-04-06 04:24:40.881049 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 3.96s
2026-04-06 04:24:40.881056 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 3.89s
2026-04-06 04:24:40.881064 | orchestrator | Gather facts for all hosts ---------------------------------------------- 3.81s
2026-04-06 04:24:41.119783 | orchestrator | ++ semver 10.0.0 10.0.0-0
2026-04-06 04:24:41.191208 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-06 04:24:41.191754 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-04-06 04:24:41.231645 | orchestrator | + OPENSTACK_VERSION=2025.1
2026-04-06 04:24:41.231739 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1
2026-04-06 04:24:41.236976 | orchestrator | + set -e
2026-04-06 04:24:41.237033 | orchestrator | + NAMESPACE=kolla/release/2025.1
2026-04-06 04:24:41.237043 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-04-06 04:24:41.243275 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh
2026-04-06 04:24:41.252202 | orchestrator |
2026-04-06 04:24:41.252281 | orchestrator | # UPGRADE SERVICES
2026-04-06 04:24:41.252292 | orchestrator |
2026-04-06 04:24:41.252299 | orchestrator | + set -e
2026-04-06 04:24:41.252306 | orchestrator | + echo
2026-04-06 04:24:41.252313 | orchestrator | + echo '# UPGRADE SERVICES'
2026-04-06 04:24:41.252344 | orchestrator | + echo
2026-04-06 04:24:41.252351 | orchestrator | + source /opt/manager-vars.sh
2026-04-06 04:24:41.252358 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-06 04:24:41.252364 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-06 04:24:41.252370 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-06 04:24:41.252377 | orchestrator | ++ CEPH_VERSION=reef
2026-04-06 04:24:41.252383 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-06 04:24:41.252391 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-06 04:24:41.252397 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-06 04:24:41.252403 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-06 04:24:41.252409 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-06 04:24:41.252415 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-06 04:24:41.252422 | orchestrator | ++ export ARA=false
2026-04-06 04:24:41.252428 | orchestrator | ++ ARA=false
2026-04-06 04:24:41.252434 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-06 04:24:41.252440 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-06 04:24:41.252447 | orchestrator | ++ export TEMPEST=false
2026-04-06 04:24:41.252453 | orchestrator | ++ TEMPEST=false
2026-04-06 04:24:41.252459 | orchestrator | ++ export IS_ZUUL=true
2026-04-06 04:24:41.252466 | orchestrator | ++ IS_ZUUL=true
2026-04-06 04:24:41.252472 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-04-06 04:24:41.252478 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-04-06 04:24:41.252485 | orchestrator | ++ export EXTERNAL_API=false
2026-04-06 04:24:41.252491 | orchestrator | ++ EXTERNAL_API=false
2026-04-06 04:24:41.252497 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-06 04:24:41.252503 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-06 04:24:41.252509 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-06 04:24:41.252515 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-06 04:24:41.252521 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-06 04:24:41.252528 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-06 04:24:41.252534 | orchestrator | ++ export RABBITMQ3TO4=true
2026-04-06 04:24:41.252540 | orchestrator | ++ RABBITMQ3TO4=true
2026-04-06 04:24:41.252547 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false
2026-04-06 04:24:41.252553 | orchestrator | + SKIP_CEPH_UPGRADE=false
2026-04-06 04:24:41.252559 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-04-06 04:24:41.261017 | orchestrator | + set -e
2026-04-06 04:24:41.261081 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-06 04:24:41.262126 | orchestrator | ++ export INTERACTIVE=false
2026-04-06 04:24:41.262170 | orchestrator | ++ INTERACTIVE=false
2026-04-06 04:24:41.262178 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-06 04:24:41.262186 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-06 04:24:41.262193 | orchestrator | + source /opt/manager-vars.sh
2026-04-06 04:24:41.262198 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-06 04:24:41.262202 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-06 04:24:41.262207 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-06 04:24:41.262212 | orchestrator | ++ CEPH_VERSION=reef
2026-04-06 04:24:41.262217 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-06 04:24:41.262223 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-06 04:24:41.262228 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-06 04:24:41.262233 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-06 04:24:41.262238 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-06 04:24:41.262244 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-06 04:24:41.262251 | orchestrator | ++ export ARA=false
2026-04-06 04:24:41.262258 | orchestrator | ++ ARA=false
2026-04-06 04:24:41.262269 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-06 04:24:41.262278 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-06 04:24:41.262285 | orchestrator | ++ export TEMPEST=false
2026-04-06 04:24:41.262292 | orchestrator | ++ TEMPEST=false
2026-04-06 04:24:41.262300 | orchestrator | ++ export IS_ZUUL=true
2026-04-06 04:24:41.262307 | orchestrator | ++ IS_ZUUL=true
2026-04-06 04:24:41.262314 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-04-06 04:24:41.262322 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-04-06 04:24:41.262329 | orchestrator | ++ export EXTERNAL_API=false
2026-04-06 04:24:41.262344 | orchestrator | ++ EXTERNAL_API=false
2026-04-06 04:24:41.262351 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-06 04:24:41.262358 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-06 04:24:41.262365 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-06 04:24:41.262372 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-06 04:24:41.262379 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-06 04:24:41.262387 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-06 04:24:41.262394 | orchestrator |
2026-04-06 04:24:41.262401 | orchestrator | # PULL IMAGES
2026-04-06 04:24:41.262418 | orchestrator |
2026-04-06 04:24:41.262450 | orchestrator | ++
export RABBITMQ3TO4=true 2026-04-06 04:24:41.262458 | orchestrator | ++ RABBITMQ3TO4=true 2026-04-06 04:24:41.262466 | orchestrator | + echo 2026-04-06 04:24:41.262474 | orchestrator | + echo '# PULL IMAGES' 2026-04-06 04:24:41.262481 | orchestrator | + echo 2026-04-06 04:24:41.263308 | orchestrator | ++ semver 9.5.0 7.0.0 2026-04-06 04:24:41.323543 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-06 04:24:41.323643 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-04-06 04:24:42.731104 | orchestrator | 2026-04-06 04:24:42 | INFO  | Trying to run play pull-images in environment custom 2026-04-06 04:24:52.852904 | orchestrator | 2026-04-06 04:24:52 | INFO  | Prepare task for execution of pull-images. 2026-04-06 04:24:52.947504 | orchestrator | 2026-04-06 04:24:52 | INFO  | Task de7d9107-b34b-424b-8480-a35a0ac92513 (pull-images) was prepared for execution. 2026-04-06 04:24:52.947600 | orchestrator | 2026-04-06 04:24:52 | INFO  | Task de7d9107-b34b-424b-8480-a35a0ac92513 is running in background. No more output. Check ARA for logs. 
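The shell traces above gate each upgrade phase on a `semver` helper that prints a comparison result which is then tested with `[[ 1 -ge 0 ]]` (e.g. `semver 10.0.0 10.0.0-0` printing `1` because the installed version is at least the required one). The following is a minimal stand-in for that pattern, assuming only GNU `sort -V`; it is not the actual `semver` command used by the testbed scripts:

```shell
# Hypothetical stand-in for the testbed's `semver` helper: print 1 if the
# first version is newer than the second, -1 if older, 0 if equal.
semver_cmp() {
  if [ "$1" = "$2" ]; then
    echo 0
  elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; then
    echo 1   # "$2" sorts first under version sort, so "$1" is newer
  else
    echo -1
  fi
}

# Guard pattern from the log: run the step only when the result is >= 0,
# i.e. the installed manager version meets the minimum.
RESULT=$(semver_cmp 10.0.0 8.0.3)
if [ "$RESULT" -ge 0 ]; then
  echo "manager version is new enough"
fi
```

Note that pre-release handling (`10.0.0-0` vs `10.0.0`) differs between `sort -V` and strict Semantic Versioning, which is one reason the real helper is a dedicated tool rather than a sort pipeline.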
2026-04-06 04:24:53.238448 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh
2026-04-06 04:24:53.248379 | orchestrator | + set -e
2026-04-06 04:24:53.248464 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-06 04:24:53.248478 | orchestrator | ++ export INTERACTIVE=false
2026-04-06 04:24:53.248489 | orchestrator | ++ INTERACTIVE=false
2026-04-06 04:24:53.248500 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-06 04:24:53.248509 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-06 04:24:53.248519 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-06 04:24:53.250779 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-06 04:24:53.262288 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-06 04:24:53.262348 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-06 04:24:53.262651 | orchestrator | ++ semver 10.0.0 8.0.3
2026-04-06 04:24:53.331084 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-06 04:24:53.331155 | orchestrator | + osism apply frr
2026-04-06 04:25:04.824985 | orchestrator | 2026-04-06 04:25:04 | INFO  | Prepare task for execution of frr.
2026-04-06 04:25:04.908712 | orchestrator | 2026-04-06 04:25:04 | INFO  | Task f2e6e282-38ac-49d8-9713-4a33c3eed293 (frr) was prepared for execution.
2026-04-06 04:25:04.908821 | orchestrator | 2026-04-06 04:25:04 | INFO  | It takes a moment until task f2e6e282-38ac-49d8-9713-4a33c3eed293 (frr) has been started and output is visible here.
2026-04-06 04:25:45.080249 | orchestrator |
2026-04-06 04:25:45.080357 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-04-06 04:25:45.080374 | orchestrator |
2026-04-06 04:25:45.080385 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-04-06 04:25:45.080395 | orchestrator | Monday 06 April 2026 04:25:12 +0000 (0:00:03.685) 0:00:03.685 **********
2026-04-06 04:25:45.080406 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-04-06 04:25:45.080417 | orchestrator |
2026-04-06 04:25:45.080427 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-04-06 04:25:45.080436 | orchestrator | Monday 06 April 2026 04:25:17 +0000 (0:00:04.688) 0:00:08.374 **********
2026-04-06 04:25:45.080446 | orchestrator | ok: [testbed-manager]
2026-04-06 04:25:45.080457 | orchestrator |
2026-04-06 04:25:45.080467 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-04-06 04:25:45.080477 | orchestrator | Monday 06 April 2026 04:25:19 +0000 (0:00:03.023) 0:00:11.012 **********
2026-04-06 04:25:45.080487 | orchestrator | ok: [testbed-manager]
2026-04-06 04:25:45.080497 | orchestrator |
2026-04-06 04:25:45.080516 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-04-06 04:25:45.080526 | orchestrator | Monday 06 April 2026 04:25:22 +0000 (0:00:02.021) 0:00:14.036 **********
2026-04-06 04:25:45.080536 | orchestrator | ok: [testbed-manager]
2026-04-06 04:25:45.080546 | orchestrator |
2026-04-06 04:25:45.080555 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-04-06 04:25:45.080565 | orchestrator | Monday 06 April 2026 04:25:24 +0000 (0:00:02.135) 0:00:16.058 **********
2026-04-06 04:25:45.080596 | orchestrator | ok: [testbed-manager]
2026-04-06 04:25:45.080607 | orchestrator |
2026-04-06 04:25:45.080632 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-04-06 04:25:45.080642 | orchestrator | Monday 06 April 2026 04:25:27 +0000 (0:00:02.722) 0:00:18.194 **********
2026-04-06 04:25:45.080660 | orchestrator | ok: [testbed-manager]
2026-04-06 04:25:45.080670 | orchestrator |
2026-04-06 04:25:45.080680 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ********
2026-04-06 04:25:45.080694 | orchestrator | Monday 06 April 2026 04:25:29 +0000 (0:00:01.355) 0:00:20.916 **********
2026-04-06 04:25:45.080704 | orchestrator | skipping: [testbed-manager]
2026-04-06 04:25:45.080715 | orchestrator |
2026-04-06 04:25:45.080725 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] ***
2026-04-06 04:25:45.080734 | orchestrator | Monday 06 April 2026 04:25:31 +0000 (0:00:01.222) 0:00:22.271 **********
2026-04-06 04:25:45.080744 | orchestrator | skipping: [testbed-manager]
2026-04-06 04:25:45.080754 | orchestrator |
2026-04-06 04:25:45.080763 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] **********
2026-04-06 04:25:45.080773 | orchestrator | Monday 06 April 2026 04:25:32 +0000 (0:00:01.301) 0:00:23.494 **********
2026-04-06 04:25:45.080783 | orchestrator | skipping: [testbed-manager]
2026-04-06 04:25:45.080794 | orchestrator |
2026-04-06 04:25:45.080806 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-04-06 04:25:45.080818 | orchestrator | Monday 06 April 2026 04:25:33 +0000 (0:00:01.248) 0:00:24.796 **********
2026-04-06 04:25:45.080829 | orchestrator | skipping: [testbed-manager]
2026-04-06 04:25:45.080841 | orchestrator |
2026-04-06 04:25:45.080853 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-04-06 04:25:45.080907 | orchestrator | Monday 06 April 2026 04:25:34 +0000 (0:00:01.201) 0:00:26.044 **********
2026-04-06 04:25:45.080919 | orchestrator | skipping: [testbed-manager]
2026-04-06 04:25:45.080931 | orchestrator |
2026-04-06 04:25:45.080943 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-04-06 04:25:45.080954 | orchestrator | Monday 06 April 2026 04:25:36 +0000 (0:00:02.194) 0:00:27.246 **********
2026-04-06 04:25:45.080965 | orchestrator | ok: [testbed-manager]
2026-04-06 04:25:45.080977 | orchestrator |
2026-04-06 04:25:45.080989 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-04-06 04:25:45.081001 | orchestrator | Monday 06 April 2026 04:25:38 +0000 (0:00:03.698) 0:00:29.441 **********
2026-04-06 04:25:45.081012 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-04-06 04:25:45.081023 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-04-06 04:25:45.081036 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-04-06 04:25:45.081047 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-04-06 04:25:45.081058 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-04-06 04:25:45.081070 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-04-06 04:25:45.081081 | orchestrator |
2026-04-06 04:25:45.081093 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-04-06 04:25:45.081104 | orchestrator | Monday 06 April 2026 04:25:42 +0000 (0:00:02.700) 0:00:33.140 **********
2026-04-06 04:25:45.081116 | orchestrator | ok: [testbed-manager]
2026-04-06 04:25:45.081127 | orchestrator |
2026-04-06 04:25:45.081139 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 04:25:45.081150 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-06 04:25:45.081162 | orchestrator |
2026-04-06 04:25:45.081172 | orchestrator |
2026-04-06 04:25:45.081190 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 04:25:45.081200 | orchestrator | Monday 06 April 2026 04:25:44 +0000 (0:00:02.700) 0:00:35.840 **********
2026-04-06 04:25:45.081209 | orchestrator | ===============================================================================
2026-04-06 04:25:45.081234 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 4.69s
2026-04-06 04:25:45.081244 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.70s
2026-04-06 04:25:45.081254 | orchestrator | osism.services.frr : Install frr package -------------------------------- 3.02s
2026-04-06 04:25:45.081263 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.72s
2026-04-06 04:25:45.081273 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.70s
2026-04-06 04:25:45.081282 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.64s
2026-04-06 04:25:45.081292 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 2.20s
2026-04-06 04:25:45.081301 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 2.14s
2026-04-06 04:25:45.081311 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 2.02s
2026-04-06 04:25:45.081320 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 1.36s
2026-04-06 04:25:45.081330 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 1.30s
2026-04-06 04:25:45.081339 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.25s
2026-04-06 04:25:45.081349 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 1.22s
2026-04-06 04:25:45.081358 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 1.20s
2026-04-06 04:25:45.316405 | orchestrator | + osism apply kubernetes
2026-04-06 04:25:46.704062 | orchestrator | 2026-04-06 04:25:46 | INFO  | Prepare task for execution of kubernetes.
2026-04-06 04:25:46.775745 | orchestrator | 2026-04-06 04:25:46 | INFO  | Task afe8d334-e1b2-4a33-91ab-f63563608726 (kubernetes) was prepared for execution.
2026-04-06 04:25:46.775844 | orchestrator | 2026-04-06 04:25:46 | INFO  | It takes a moment until task afe8d334-e1b2-4a33-91ab-f63563608726 (kubernetes) has been started and output is visible here.
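Before applying frr above, `manager-version.sh` derives `MANAGER_VERSION` from the configuration repository with `awk '-F: ' '/^manager_version:/ { print $2 }'`. The same extraction can be reproduced against a sample file; the YAML snippet below is illustrative, only the key name and the awk invocation come from the trace:

```shell
# Illustrative config snippet; the real file lives at
# /opt/configuration/environments/manager/configuration.yml.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
---
manager_version: 10.0.0
openstack_version: 2025.1
EOF

# '-F: ' sets the field separator to "colon space", so $2 is the bare value
# after the key; the /^manager_version:/ pattern restricts it to one line.
MANAGER_VERSION=$(awk '-F: ' '/^manager_version:/ { print $2 }' "$CONFIG")
echo "$MANAGER_VERSION"
rm -f "$CONFIG"
```

This works for the flat key/value layout shown, but a full YAML parser would be needed for nested or quoted values.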
2026-04-06 04:26:35.830417 | orchestrator |
2026-04-06 04:26:35.830595 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-04-06 04:26:35.830626 | orchestrator |
2026-04-06 04:26:35.830647 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-04-06 04:26:35.830679 | orchestrator | Monday 06 April 2026 04:25:52 +0000 (0:00:01.861) 0:00:01.861 **********
2026-04-06 04:26:35.830698 | orchestrator | ok: [testbed-node-3]
2026-04-06 04:26:35.830714 | orchestrator | ok: [testbed-node-4]
2026-04-06 04:26:35.830725 | orchestrator | ok: [testbed-node-5]
2026-04-06 04:26:35.830736 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:26:35.830751 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:26:35.830769 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:26:35.830789 | orchestrator |
2026-04-06 04:26:35.830822 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-04-06 04:26:35.830842 | orchestrator | Monday 06 April 2026 04:25:58 +0000 (0:00:05.601) 0:00:07.462 **********
2026-04-06 04:26:35.830860 | orchestrator | skipping: [testbed-node-3]
2026-04-06 04:26:35.830881 | orchestrator | skipping: [testbed-node-4]
2026-04-06 04:26:35.830929 | orchestrator | skipping: [testbed-node-5]
2026-04-06 04:26:35.830952 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:26:35.830972 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:26:35.830992 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:26:35.831013 | orchestrator |
2026-04-06 04:26:35.831034 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-04-06 04:26:35.831054 | orchestrator | Monday 06 April 2026 04:26:00 +0000 (0:00:02.446) 0:00:09.909 **********
2026-04-06 04:26:35.831108 | orchestrator | skipping: [testbed-node-3]
2026-04-06 04:26:35.831130 | orchestrator | skipping: [testbed-node-4]
2026-04-06 04:26:35.831149 | orchestrator | skipping: [testbed-node-5]
2026-04-06 04:26:35.831169 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:26:35.831188 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:26:35.831205 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:26:35.831216 | orchestrator |
2026-04-06 04:26:35.831227 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-04-06 04:26:35.831238 | orchestrator | Monday 06 April 2026 04:26:02 +0000 (0:00:02.058) 0:00:11.967 **********
2026-04-06 04:26:35.831249 | orchestrator | ok: [testbed-node-3]
2026-04-06 04:26:35.831260 | orchestrator | ok: [testbed-node-4]
2026-04-06 04:26:35.831271 | orchestrator | ok: [testbed-node-5]
2026-04-06 04:26:35.831282 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:26:35.831293 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:26:35.831303 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:26:35.831314 | orchestrator |
2026-04-06 04:26:35.831326 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-04-06 04:26:35.831337 | orchestrator | Monday 06 April 2026 04:26:06 +0000 (0:00:03.823) 0:00:15.791 **********
2026-04-06 04:26:35.831348 | orchestrator | ok: [testbed-node-3]
2026-04-06 04:26:35.831359 | orchestrator | ok: [testbed-node-4]
2026-04-06 04:26:35.831369 | orchestrator | ok: [testbed-node-5]
2026-04-06 04:26:35.831380 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:26:35.831391 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:26:35.831402 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:26:35.831412 | orchestrator |
2026-04-06 04:26:35.831423 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-04-06 04:26:35.831437 | orchestrator | Monday 06 April 2026 04:26:08 +0000 (0:00:02.302) 0:00:18.094 **********
2026-04-06 04:26:35.831456 | orchestrator | ok: [testbed-node-3]
2026-04-06 04:26:35.831474 | orchestrator | ok: [testbed-node-4]
2026-04-06 04:26:35.831491 | orchestrator | ok: [testbed-node-5]
2026-04-06 04:26:35.831510 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:26:35.831526 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:26:35.831537 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:26:35.831547 | orchestrator |
2026-04-06 04:26:35.831559 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-04-06 04:26:35.831570 | orchestrator | Monday 06 April 2026 04:26:11 +0000 (0:00:02.693) 0:00:20.788 **********
2026-04-06 04:26:35.831604 | orchestrator | skipping: [testbed-node-3]
2026-04-06 04:26:35.831684 | orchestrator | skipping: [testbed-node-4]
2026-04-06 04:26:35.831698 | orchestrator | skipping: [testbed-node-5]
2026-04-06 04:26:35.831709 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:26:35.831721 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:26:35.831731 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:26:35.831742 | orchestrator |
2026-04-06 04:26:35.831753 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-04-06 04:26:35.831764 | orchestrator | Monday 06 April 2026 04:26:14 +0000 (0:00:03.000) 0:00:23.789 **********
2026-04-06 04:26:35.831775 | orchestrator | skipping: [testbed-node-3]
2026-04-06 04:26:35.831785 | orchestrator | skipping: [testbed-node-4]
2026-04-06 04:26:35.831796 | orchestrator | skipping: [testbed-node-5]
2026-04-06 04:26:35.831807 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:26:35.831818 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:26:35.831828 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:26:35.831839 | orchestrator |
2026-04-06 04:26:35.831850 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-04-06 04:26:35.831876 | orchestrator | Monday 06 April 2026 04:26:17 +0000 (0:00:02.792) 0:00:26.582 **********
2026-04-06 04:26:35.831994 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 04:26:35.832019 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 04:26:35.832038 | orchestrator | skipping: [testbed-node-3]
2026-04-06 04:26:35.832086 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 04:26:35.832105 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 04:26:35.832122 | orchestrator | skipping: [testbed-node-4]
2026-04-06 04:26:35.832139 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 04:26:35.832157 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 04:26:35.832176 | orchestrator | skipping: [testbed-node-5]
2026-04-06 04:26:35.832195 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 04:26:35.832211 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 04:26:35.832223 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:26:35.832258 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 04:26:35.832270 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 04:26:35.832281 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:26:35.832292 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 04:26:35.832302 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 04:26:35.832313 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:26:35.832323 | orchestrator |
2026-04-06 04:26:35.832334 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-04-06 04:26:35.832345 | orchestrator | Monday 06 April 2026 04:26:19 +0000 (0:00:02.102) 0:00:28.684 **********
2026-04-06 04:26:35.832355 | orchestrator | skipping: [testbed-node-3]
2026-04-06 04:26:35.832366 | orchestrator | skipping: [testbed-node-4]
2026-04-06 04:26:35.832377 | orchestrator | skipping: [testbed-node-5]
2026-04-06 04:26:35.832387 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:26:35.832398 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:26:35.832408 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:26:35.832419 | orchestrator |
2026-04-06 04:26:35.832429 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-04-06 04:26:35.832441 | orchestrator | Monday 06 April 2026 04:26:21 +0000 (0:00:01.856) 0:00:30.914 **********
2026-04-06 04:26:35.832452 | orchestrator | ok: [testbed-node-3]
2026-04-06 04:26:35.832463 | orchestrator | ok: [testbed-node-4]
2026-04-06 04:26:35.832473 | orchestrator | ok: [testbed-node-5]
2026-04-06 04:26:35.832484 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:26:35.832494 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:26:35.832505 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:26:35.832516 | orchestrator |
2026-04-06 04:26:35.832526 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-04-06 04:26:35.832537 | orchestrator | Monday 06 April 2026 04:26:23 +0000 (0:00:02.987) 0:00:32.771 **********
2026-04-06 04:26:35.832548 | orchestrator | ok: [testbed-node-4]
2026-04-06 04:26:35.832558 | orchestrator | ok: [testbed-node-3]
2026-04-06 04:26:35.832569 | orchestrator | ok: [testbed-node-5]
2026-04-06 04:26:35.832579 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:26:35.832590 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:26:35.832600 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:26:35.832611 | orchestrator |
2026-04-06 04:26:35.832622 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-04-06 04:26:35.832633 | orchestrator | Monday 06 April 2026 04:26:26 +0000 (0:00:02.024) 0:00:35.759 **********
2026-04-06 04:26:35.832643 | orchestrator | skipping: [testbed-node-3]
2026-04-06 04:26:35.832654 | orchestrator | skipping: [testbed-node-4]
2026-04-06 04:26:35.832665 | orchestrator | skipping: [testbed-node-5]
2026-04-06 04:26:35.832681 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:26:35.832692 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:26:35.832702 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:26:35.832713 | orchestrator |
2026-04-06 04:26:35.832724 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-04-06 04:26:35.832810 | orchestrator | Monday 06 April 2026 04:26:28 +0000 (0:00:02.024) 0:00:37.783 **********
2026-04-06 04:26:35.832830 | orchestrator | skipping: [testbed-node-3]
2026-04-06 04:26:35.832848 | orchestrator | skipping: [testbed-node-4]
2026-04-06 04:26:35.832866 | orchestrator | skipping: [testbed-node-5]
2026-04-06 04:26:35.832883 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:26:35.833040 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:26:35.833070 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:26:35.833082 | orchestrator |
2026-04-06 04:26:35.833093 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-04-06 04:26:35.833106 | orchestrator | Monday 06 April 2026 04:26:31 +0000 (0:00:02.423) 0:00:40.206 **********
2026-04-06 04:26:35.833117 | orchestrator | skipping: [testbed-node-3]
2026-04-06 04:26:35.833127 | orchestrator | skipping: [testbed-node-4]
2026-04-06 04:26:35.833138 | orchestrator | skipping: [testbed-node-5]
2026-04-06 04:26:35.833155 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:26:35.833178 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:26:35.833205 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:26:35.833222 | orchestrator |
2026-04-06 04:26:35.833239 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-04-06 04:26:35.833256 | orchestrator | Monday 06 April 2026 04:26:33 +0000 (0:00:02.227) 0:00:42.433 **********
2026-04-06 04:26:35.833274 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-04-06 04:26:35.833290 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-04-06 04:26:35.833308 | orchestrator | skipping: [testbed-node-3]
2026-04-06 04:26:35.833328 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-04-06 04:26:35.833345 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-04-06 04:26:35.833364 | orchestrator | skipping: [testbed-node-4]
2026-04-06 04:26:35.833383 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-04-06 04:26:35.833401 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-04-06 04:26:35.833413 | orchestrator | skipping: [testbed-node-5]
2026-04-06 04:26:35.833424 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-04-06 04:26:35.833434 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-04-06 04:26:35.833445 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:26:35.833456 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-04-06 04:26:35.833467 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-04-06 04:26:35.833477 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:26:35.833488 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-04-06 04:26:35.833499 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-04-06 04:26:35.833509 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:26:35.833520 | orchestrator |
2026-04-06 04:26:35.833531 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-06 04:26:35.833542 | orchestrator | Monday 06 April 2026 04:26:35 +0000 (0:00:01.954) 0:00:44.388 **********
2026-04-06 04:26:35.833561 | orchestrator | skipping: [testbed-node-3]
2026-04-06 04:26:35.833572 | orchestrator | skipping: [testbed-node-4]
2026-04-06 04:26:35.833598 | orchestrator | skipping: [testbed-node-5]
2026-04-06 04:28:26.443223 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:28:26.443373 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:28:26.443406 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:28:26.443431 | orchestrator |
2026-04-06 04:28:26.443448 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-06 04:28:26.443463 | orchestrator | Monday 06 April 2026 04:26:37 +0000 (0:00:02.137) 0:00:46.526 **********
2026-04-06 04:28:26.443477 | orchestrator | skipping: [testbed-node-3]
2026-04-06 04:28:26.443490 | orchestrator | skipping: [testbed-node-4]
2026-04-06 04:28:26.443503 | orchestrator | skipping: [testbed-node-5]
2026-04-06 04:28:26.443543 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:28:26.443557 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:28:26.443570 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:28:26.443583 | orchestrator |
2026-04-06 04:28:26.443597 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-06 04:28:26.443610 | orchestrator |
2026-04-06 04:28:26.443623 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-06 04:28:26.443640 | orchestrator | Monday 06 April 2026 04:26:40 +0000 (0:00:03.264) 0:00:49.791 **********
2026-04-06 04:28:26.443654 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:28:26.443670 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:28:26.443680 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:28:26.443688 | orchestrator |
2026-04-06 04:28:26.443696 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-06 04:28:26.443705 | orchestrator | Monday 06 April 2026 04:26:43 +0000 (0:00:02.930) 0:00:52.721 **********
2026-04-06 04:28:26.443713 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:28:26.443721 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:28:26.443739 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:28:26.443747 | orchestrator |
2026-04-06 04:28:26.443757 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-06 04:28:26.443766 | orchestrator | Monday 06 April 2026 04:26:46 +0000 (0:00:02.721) 0:00:55.443 **********
2026-04-06 04:28:26.443775 | orchestrator | changed: [testbed-node-0]
2026-04-06 04:28:26.443785 | orchestrator | changed: [testbed-node-1]
2026-04-06 04:28:26.443795 | orchestrator | changed: [testbed-node-2]
2026-04-06 04:28:26.443805 | orchestrator |
2026-04-06 04:28:26.443815 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-06 04:28:26.443824 | orchestrator | Monday 06 April 2026 04:26:48 +0000 (0:00:02.234) 0:00:57.677 **********
2026-04-06 04:28:26.443833 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:28:26.443842 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:28:26.443851 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:28:26.443860 | orchestrator |
2026-04-06 04:28:26.443870 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-04-06 04:28:26.443879 | orchestrator | Monday 06 April 2026 04:26:50 +0000 (0:00:01.784) 0:00:59.461 **********
2026-04-06 04:28:26.443888 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:28:26.443898 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:28:26.443907 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:28:26.443916 | orchestrator |
2026-04-06 04:28:26.443926 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-04-06 04:28:26.443935 | orchestrator | Monday 06 April 2026 04:26:51 +0000 (0:00:01.417) 0:01:00.879 **********
2026-04-06 04:28:26.443945 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:28:26.443954 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:28:26.444022 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:28:26.444033 | orchestrator |
2026-04-06 04:28:26.444043 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-04-06 04:28:26.444052 | orchestrator | Monday 06 April 2026 04:26:53 +0000 (0:00:02.111) 0:01:02.990 **********
2026-04-06 04:28:26.444061 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:28:26.444070 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:28:26.444079 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:28:26.444088 | orchestrator |
2026-04-06 04:28:26.444098 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-04-06 04:28:26.444107 | orchestrator | Monday 06 April 2026 04:26:56 +0000 (0:00:02.386) 0:01:05.377 **********
2026-04-06 04:28:26.444117 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 04:28:26.444126 | orchestrator |
2026-04-06 04:28:26.444134 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-04-06 04:28:26.444142 | orchestrator | Monday 06 April 2026 04:26:58 +0000 (0:00:01.988) 0:01:07.365 **********
2026-04-06 04:28:26.444159 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:28:26.444167 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:28:26.444175 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:28:26.444183 | orchestrator |
2026-04-06 04:28:26.444191 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-04-06 04:28:26.444199 | orchestrator | Monday 06 April 2026 04:27:01 +0000 (0:00:02.813) 0:01:10.179 **********
2026-04-06 04:28:26.444207 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:28:26.444214 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:28:26.444222 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:28:26.444230 | orchestrator |
2026-04-06 04:28:26.444238 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-04-06 04:28:26.444246 | orchestrator | Monday 06 April 2026 04:27:02 +0000 (0:00:01.644) 0:01:11.824 **********
2026-04-06 04:28:26.444254 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:28:26.444261 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:28:26.444269 | orchestrator | changed: [testbed-node-0]
2026-04-06 04:28:26.444277 | orchestrator |
2026-04-06 04:28:26.444285 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-04-06 04:28:26.444293 | orchestrator | Monday 06 April 2026 04:27:04 +0000 (0:00:02.049) 0:01:13.873 **********
2026-04-06 04:28:26.444301 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:28:26.444308 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:28:26.444317 | orchestrator | changed: [testbed-node-0]
2026-04-06 04:28:26.444325 | orchestrator |
2026-04-06 04:28:26.444333 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-04-06 04:28:26.444340 | orchestrator | Monday 06 April 2026 04:27:07 +0000 (0:00:02.528) 0:01:16.402 **********
2026-04-06 04:28:26.444349 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:28:26.444357 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:28:26.444385 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:28:26.444393 |
orchestrator | 2026-04-06 04:28:26.444402 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-04-06 04:28:26.444410 | orchestrator | Monday 06 April 2026 04:27:08 +0000 (0:00:01.574) 0:01:17.977 ********** 2026-04-06 04:28:26.444418 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:28:26.444426 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:28:26.444434 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:28:26.444441 | orchestrator | 2026-04-06 04:28:26.444449 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-04-06 04:28:26.444457 | orchestrator | Monday 06 April 2026 04:27:10 +0000 (0:00:01.464) 0:01:19.441 ********** 2026-04-06 04:28:26.444465 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:28:26.444473 | orchestrator | changed: [testbed-node-1] 2026-04-06 04:28:26.444480 | orchestrator | changed: [testbed-node-2] 2026-04-06 04:28:26.444488 | orchestrator | 2026-04-06 04:28:26.444496 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-04-06 04:28:26.444520 | orchestrator | Monday 06 April 2026 04:27:12 +0000 (0:00:02.302) 0:01:21.744 ********** 2026-04-06 04:28:26.444528 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:28:26.444536 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:28:26.444544 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:28:26.444552 | orchestrator | 2026-04-06 04:28:26.444560 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-04-06 04:28:26.444568 | orchestrator | Monday 06 April 2026 04:27:14 +0000 (0:00:02.284) 0:01:24.028 ********** 2026-04-06 04:28:26.444576 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:28:26.444584 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:28:26.444591 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:28:26.444599 | orchestrator | 2026-04-06 04:28:26.444607 
| orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-04-06 04:28:26.444615 | orchestrator | Monday 06 April 2026 04:27:16 +0000 (0:00:01.472) 0:01:25.501 ********** 2026-04-06 04:28:26.444623 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-06 04:28:26.444650 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-06 04:28:26.444658 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-06 04:28:26.444666 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-06 04:28:26.444674 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-06 04:28:26.444682 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-06 04:28:26.444690 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-06 04:28:26.444698 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-06 04:28:26.444706 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-04-06 04:28:26.444713 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:28:26.444721 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:28:26.444729 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:28:26.444737 | orchestrator | 2026-04-06 04:28:26.444745 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-04-06 04:28:26.444753 | orchestrator | Monday 06 April 2026 04:27:50 +0000 (0:00:33.819) 0:01:59.320 ********** 2026-04-06 04:28:26.444761 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:28:26.444768 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:28:26.444776 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:28:26.444784 | orchestrator | 2026-04-06 04:28:26.444792 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-04-06 04:28:26.444800 | orchestrator | Monday 06 April 2026 04:27:51 +0000 (0:00:01.427) 0:02:00.748 ********** 2026-04-06 04:28:26.444807 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:28:26.444815 | orchestrator | changed: [testbed-node-1] 2026-04-06 04:28:26.444823 | orchestrator | changed: [testbed-node-2] 2026-04-06 04:28:26.444831 | orchestrator | 2026-04-06 04:28:26.444839 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-04-06 04:28:26.444846 | orchestrator | Monday 06 April 2026 04:27:54 +0000 (0:00:02.652) 0:02:03.400 ********** 2026-04-06 04:28:26.444854 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:28:26.444862 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:28:26.444870 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:28:26.444878 | orchestrator | 2026-04-06 04:28:26.444886 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-04-06 04:28:26.444893 | orchestrator | Monday 06 April 2026 04:27:56 +0000 (0:00:02.336) 0:02:05.737 ********** 2026-04-06 04:28:26.444901 | orchestrator 
| changed: [testbed-node-1] 2026-04-06 04:28:26.444909 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:28:26.444917 | orchestrator | changed: [testbed-node-2] 2026-04-06 04:28:26.444925 | orchestrator | 2026-04-06 04:28:26.444932 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-04-06 04:28:26.444940 | orchestrator | Monday 06 April 2026 04:28:24 +0000 (0:00:27.888) 0:02:33.625 ********** 2026-04-06 04:28:26.444948 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:28:26.444956 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:28:26.444984 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:28:26.444993 | orchestrator | 2026-04-06 04:28:26.445006 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-04-06 04:28:26.445020 | orchestrator | Monday 06 April 2026 04:28:26 +0000 (0:00:01.958) 0:02:35.584 ********** 2026-04-06 04:29:17.707379 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:29:17.707499 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:29:17.707516 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:29:17.707528 | orchestrator | 2026-04-06 04:29:17.707543 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-04-06 04:29:17.707565 | orchestrator | Monday 06 April 2026 04:28:28 +0000 (0:00:01.842) 0:02:37.427 ********** 2026-04-06 04:29:17.707584 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:29:17.707604 | orchestrator | changed: [testbed-node-1] 2026-04-06 04:29:17.707622 | orchestrator | changed: [testbed-node-2] 2026-04-06 04:29:17.707640 | orchestrator | 2026-04-06 04:29:17.707658 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-04-06 04:29:17.707678 | orchestrator | Monday 06 April 2026 04:28:30 +0000 (0:00:01.836) 0:02:39.264 ********** 2026-04-06 04:29:17.707697 | orchestrator | ok: [testbed-node-1] 2026-04-06 
04:29:17.707717 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:29:17.707736 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:29:17.707753 | orchestrator | 2026-04-06 04:29:17.707771 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-04-06 04:29:17.707791 | orchestrator | Monday 06 April 2026 04:28:31 +0000 (0:00:01.829) 0:02:41.093 ********** 2026-04-06 04:29:17.707810 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:29:17.707826 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:29:17.707837 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:29:17.707848 | orchestrator | 2026-04-06 04:29:17.707859 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-04-06 04:29:17.707870 | orchestrator | Monday 06 April 2026 04:28:33 +0000 (0:00:01.752) 0:02:42.845 ********** 2026-04-06 04:29:17.707882 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:29:17.707910 | orchestrator | changed: [testbed-node-1] 2026-04-06 04:29:17.707936 | orchestrator | changed: [testbed-node-2] 2026-04-06 04:29:17.707950 | orchestrator | 2026-04-06 04:29:17.707964 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-04-06 04:29:17.707976 | orchestrator | Monday 06 April 2026 04:28:35 +0000 (0:00:01.806) 0:02:44.652 ********** 2026-04-06 04:29:17.707989 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:29:17.708036 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:29:17.708050 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:29:17.708062 | orchestrator | 2026-04-06 04:29:17.708075 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-04-06 04:29:17.708088 | orchestrator | Monday 06 April 2026 04:28:37 +0000 (0:00:01.979) 0:02:46.631 ********** 2026-04-06 04:29:17.708100 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:29:17.708114 | orchestrator | changed: 
[testbed-node-1] 2026-04-06 04:29:17.708127 | orchestrator | changed: [testbed-node-2] 2026-04-06 04:29:17.708140 | orchestrator | 2026-04-06 04:29:17.708151 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-04-06 04:29:17.708162 | orchestrator | Monday 06 April 2026 04:28:39 +0000 (0:00:02.225) 0:02:48.857 ********** 2026-04-06 04:29:17.708173 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:29:17.708184 | orchestrator | changed: [testbed-node-1] 2026-04-06 04:29:17.708195 | orchestrator | changed: [testbed-node-2] 2026-04-06 04:29:17.708206 | orchestrator | 2026-04-06 04:29:17.708217 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-04-06 04:29:17.708228 | orchestrator | Monday 06 April 2026 04:28:42 +0000 (0:00:02.341) 0:02:51.198 ********** 2026-04-06 04:29:17.708243 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:29:17.708262 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:29:17.708281 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:29:17.708301 | orchestrator | 2026-04-06 04:29:17.708321 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-04-06 04:29:17.708336 | orchestrator | Monday 06 April 2026 04:28:43 +0000 (0:00:01.403) 0:02:52.601 ********** 2026-04-06 04:29:17.708347 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:29:17.708358 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:29:17.708394 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:29:17.708405 | orchestrator | 2026-04-06 04:29:17.708416 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-04-06 04:29:17.708430 | orchestrator | Monday 06 April 2026 04:28:44 +0000 (0:00:01.529) 0:02:54.131 ********** 2026-04-06 04:29:17.708448 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:29:17.708466 | orchestrator | ok: [testbed-node-1] 
2026-04-06 04:29:17.708483 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:29:17.708500 | orchestrator | 2026-04-06 04:29:17.708518 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-04-06 04:29:17.708535 | orchestrator | Monday 06 April 2026 04:28:46 +0000 (0:00:01.746) 0:02:55.877 ********** 2026-04-06 04:29:17.708554 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:29:17.708571 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:29:17.708583 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:29:17.708593 | orchestrator | 2026-04-06 04:29:17.708606 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-04-06 04:29:17.708619 | orchestrator | Monday 06 April 2026 04:28:48 +0000 (0:00:01.762) 0:02:57.640 ********** 2026-04-06 04:29:17.708630 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-06 04:29:17.708642 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-06 04:29:17.708653 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-06 04:29:17.708664 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-06 04:29:17.708674 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-06 04:29:17.708685 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-06 04:29:17.708713 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-06 04:29:17.708726 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-06 04:29:17.708760 | 
orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-06 04:29:17.708780 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-04-06 04:29:17.708798 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-06 04:29:17.708817 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-06 04:29:17.708836 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-04-06 04:29:17.708855 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-06 04:29:17.708871 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-06 04:29:17.708882 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-06 04:29:17.708893 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-06 04:29:17.708904 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-06 04:29:17.708915 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-06 04:29:17.708926 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-06 04:29:17.708937 | orchestrator | 2026-04-06 04:29:17.708948 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-04-06 04:29:17.708959 | orchestrator | 2026-04-06 04:29:17.708970 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-04-06 04:29:17.709126 | orchestrator | Monday 06 April 2026 04:28:53 +0000 (0:00:04.673) 0:03:02.314 ********** 
2026-04-06 04:29:17.709143 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:29:17.709155 | orchestrator | ok: [testbed-node-4] 2026-04-06 04:29:17.709166 | orchestrator | ok: [testbed-node-5] 2026-04-06 04:29:17.709177 | orchestrator | 2026-04-06 04:29:17.709188 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-04-06 04:29:17.709199 | orchestrator | Monday 06 April 2026 04:28:55 +0000 (0:00:01.975) 0:03:04.290 ********** 2026-04-06 04:29:17.709209 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:29:17.709220 | orchestrator | ok: [testbed-node-4] 2026-04-06 04:29:17.709231 | orchestrator | ok: [testbed-node-5] 2026-04-06 04:29:17.709242 | orchestrator | 2026-04-06 04:29:17.709253 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-04-06 04:29:17.709265 | orchestrator | Monday 06 April 2026 04:28:57 +0000 (0:00:01.883) 0:03:06.173 ********** 2026-04-06 04:29:17.709275 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:29:17.709286 | orchestrator | ok: [testbed-node-4] 2026-04-06 04:29:17.709297 | orchestrator | ok: [testbed-node-5] 2026-04-06 04:29:17.709308 | orchestrator | 2026-04-06 04:29:17.709319 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-04-06 04:29:17.709330 | orchestrator | Monday 06 April 2026 04:28:58 +0000 (0:00:01.471) 0:03:07.644 ********** 2026-04-06 04:29:17.709341 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 04:29:17.709352 | orchestrator | 2026-04-06 04:29:17.709363 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-04-06 04:29:17.709374 | orchestrator | Monday 06 April 2026 04:29:00 +0000 (0:00:01.953) 0:03:09.598 ********** 2026-04-06 04:29:17.709385 | orchestrator | skipping: [testbed-node-3] 2026-04-06 04:29:17.709396 | orchestrator | 
skipping: [testbed-node-4] 2026-04-06 04:29:17.709407 | orchestrator | skipping: [testbed-node-5] 2026-04-06 04:29:17.709418 | orchestrator | 2026-04-06 04:29:17.709429 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-04-06 04:29:17.709440 | orchestrator | Monday 06 April 2026 04:29:01 +0000 (0:00:01.395) 0:03:10.994 ********** 2026-04-06 04:29:17.709451 | orchestrator | skipping: [testbed-node-3] 2026-04-06 04:29:17.709464 | orchestrator | skipping: [testbed-node-4] 2026-04-06 04:29:17.709482 | orchestrator | skipping: [testbed-node-5] 2026-04-06 04:29:17.709498 | orchestrator | 2026-04-06 04:29:17.709512 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-04-06 04:29:17.709538 | orchestrator | Monday 06 April 2026 04:29:03 +0000 (0:00:01.420) 0:03:12.414 ********** 2026-04-06 04:29:17.709559 | orchestrator | skipping: [testbed-node-3] 2026-04-06 04:29:17.709577 | orchestrator | skipping: [testbed-node-4] 2026-04-06 04:29:17.709594 | orchestrator | skipping: [testbed-node-5] 2026-04-06 04:29:17.709611 | orchestrator | 2026-04-06 04:29:17.709628 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-04-06 04:29:17.709645 | orchestrator | Monday 06 April 2026 04:29:04 +0000 (0:00:01.448) 0:03:13.863 ********** 2026-04-06 04:29:17.709663 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:29:17.709682 | orchestrator | ok: [testbed-node-4] 2026-04-06 04:29:17.709699 | orchestrator | ok: [testbed-node-5] 2026-04-06 04:29:17.709718 | orchestrator | 2026-04-06 04:29:17.709736 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-04-06 04:29:17.709751 | orchestrator | Monday 06 April 2026 04:29:06 +0000 (0:00:01.849) 0:03:15.713 ********** 2026-04-06 04:29:17.709762 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:29:17.709773 | orchestrator | ok: [testbed-node-4] 
2026-04-06 04:29:17.709784 | orchestrator | ok: [testbed-node-5] 2026-04-06 04:29:17.709795 | orchestrator | 2026-04-06 04:29:17.709807 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-04-06 04:29:17.709818 | orchestrator | Monday 06 April 2026 04:29:08 +0000 (0:00:02.297) 0:03:18.010 ********** 2026-04-06 04:29:17.709829 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:29:17.709852 | orchestrator | ok: [testbed-node-4] 2026-04-06 04:29:17.709863 | orchestrator | ok: [testbed-node-5] 2026-04-06 04:29:17.709874 | orchestrator | 2026-04-06 04:29:17.709894 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-04-06 04:29:17.709906 | orchestrator | Monday 06 April 2026 04:29:11 +0000 (0:00:02.456) 0:03:20.467 ********** 2026-04-06 04:29:17.709932 | orchestrator | changed: [testbed-node-3] 2026-04-06 04:30:33.217828 | orchestrator | changed: [testbed-node-4] 2026-04-06 04:30:33.217914 | orchestrator | changed: [testbed-node-5] 2026-04-06 04:30:33.217922 | orchestrator | 2026-04-06 04:30:33.217930 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-06 04:30:33.217938 | orchestrator | 2026-04-06 04:30:33.217945 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-06 04:30:33.217953 | orchestrator | Monday 06 April 2026 04:29:19 +0000 (0:00:08.433) 0:03:28.900 ********** 2026-04-06 04:30:33.217960 | orchestrator | ok: [testbed-manager] 2026-04-06 04:30:33.217968 | orchestrator | 2026-04-06 04:30:33.217974 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-06 04:30:33.217981 | orchestrator | Monday 06 April 2026 04:29:22 +0000 (0:00:02.297) 0:03:31.198 ********** 2026-04-06 04:30:33.217988 | orchestrator | ok: [testbed-manager] 2026-04-06 04:30:33.217994 | orchestrator | 2026-04-06 04:30:33.218000 | 
orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-06 04:30:33.218006 | orchestrator | Monday 06 April 2026 04:29:23 +0000 (0:00:01.450) 0:03:32.648 ********** 2026-04-06 04:30:33.218072 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-06 04:30:33.218081 | orchestrator | 2026-04-06 04:30:33.218087 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-06 04:30:33.218095 | orchestrator | Monday 06 April 2026 04:29:25 +0000 (0:00:01.737) 0:03:34.385 ********** 2026-04-06 04:30:33.218102 | orchestrator | changed: [testbed-manager] 2026-04-06 04:30:33.218109 | orchestrator | 2026-04-06 04:30:33.218116 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-06 04:30:33.218123 | orchestrator | Monday 06 April 2026 04:29:27 +0000 (0:00:02.002) 0:03:36.388 ********** 2026-04-06 04:30:33.218130 | orchestrator | changed: [testbed-manager] 2026-04-06 04:30:33.218136 | orchestrator | 2026-04-06 04:30:33.218143 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-06 04:30:33.218150 | orchestrator | Monday 06 April 2026 04:29:29 +0000 (0:00:01.941) 0:03:38.330 ********** 2026-04-06 04:30:33.218156 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-06 04:30:33.218162 | orchestrator | 2026-04-06 04:30:33.218169 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-06 04:30:33.218176 | orchestrator | Monday 06 April 2026 04:29:32 +0000 (0:00:03.330) 0:03:41.660 ********** 2026-04-06 04:30:33.218183 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-06 04:30:33.218189 | orchestrator | 2026-04-06 04:30:33.218196 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-04-06 04:30:33.218204 | orchestrator | Monday 06 April 2026 
04:29:34 +0000 (0:00:02.019) 0:03:43.680 ********** 2026-04-06 04:30:33.218210 | orchestrator | ok: [testbed-manager] 2026-04-06 04:30:33.218217 | orchestrator | 2026-04-06 04:30:33.218224 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-04-06 04:30:33.218231 | orchestrator | Monday 06 April 2026 04:29:35 +0000 (0:00:01.459) 0:03:45.139 ********** 2026-04-06 04:30:33.218237 | orchestrator | ok: [testbed-manager] 2026-04-06 04:30:33.218243 | orchestrator | 2026-04-06 04:30:33.218250 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-04-06 04:30:33.218256 | orchestrator | 2026-04-06 04:30:33.218263 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-04-06 04:30:33.218269 | orchestrator | Monday 06 April 2026 04:29:38 +0000 (0:00:02.197) 0:03:47.337 ********** 2026-04-06 04:30:33.218276 | orchestrator | ok: [testbed-manager] 2026-04-06 04:30:33.218283 | orchestrator | 2026-04-06 04:30:33.218309 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-04-06 04:30:33.218316 | orchestrator | Monday 06 April 2026 04:29:39 +0000 (0:00:01.257) 0:03:48.595 ********** 2026-04-06 04:30:33.218322 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-04-06 04:30:33.218330 | orchestrator | 2026-04-06 04:30:33.218337 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-04-06 04:30:33.218344 | orchestrator | Monday 06 April 2026 04:29:41 +0000 (0:00:01.707) 0:03:50.302 ********** 2026-04-06 04:30:33.218351 | orchestrator | ok: [testbed-manager] 2026-04-06 04:30:33.218358 | orchestrator | 2026-04-06 04:30:33.218365 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-04-06 04:30:33.218372 | orchestrator | Monday 06 April 2026 
04:29:43 +0000 (0:00:01.997) 0:03:52.300 ********** 2026-04-06 04:30:33.218378 | orchestrator | ok: [testbed-manager] 2026-04-06 04:30:33.218385 | orchestrator | 2026-04-06 04:30:33.218392 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-04-06 04:30:33.218399 | orchestrator | Monday 06 April 2026 04:29:46 +0000 (0:00:02.898) 0:03:55.198 ********** 2026-04-06 04:30:33.218405 | orchestrator | ok: [testbed-manager] 2026-04-06 04:30:33.218412 | orchestrator | 2026-04-06 04:30:33.218419 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-04-06 04:30:33.218425 | orchestrator | Monday 06 April 2026 04:29:47 +0000 (0:00:01.591) 0:03:56.790 ********** 2026-04-06 04:30:33.218433 | orchestrator | ok: [testbed-manager] 2026-04-06 04:30:33.218440 | orchestrator | 2026-04-06 04:30:33.218447 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-04-06 04:30:33.218454 | orchestrator | Monday 06 April 2026 04:29:49 +0000 (0:00:01.517) 0:03:58.308 ********** 2026-04-06 04:30:33.218461 | orchestrator | ok: [testbed-manager] 2026-04-06 04:30:33.218468 | orchestrator | 2026-04-06 04:30:33.218475 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-04-06 04:30:33.218482 | orchestrator | Monday 06 April 2026 04:29:50 +0000 (0:00:01.811) 0:04:00.119 ********** 2026-04-06 04:30:33.218488 | orchestrator | ok: [testbed-manager] 2026-04-06 04:30:33.218495 | orchestrator | 2026-04-06 04:30:33.218503 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-04-06 04:30:33.218510 | orchestrator | Monday 06 April 2026 04:29:53 +0000 (0:00:02.808) 0:04:02.927 ********** 2026-04-06 04:30:33.218516 | orchestrator | ok: [testbed-manager] 2026-04-06 04:30:33.218523 | orchestrator | 2026-04-06 04:30:33.218530 | orchestrator | PLAY [Run post actions on master 
nodes] ****************************************
2026-04-06 04:30:33.218536 | orchestrator |
2026-04-06 04:30:33.218543 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-04-06 04:30:33.218590 | orchestrator | Monday 06 April 2026 04:29:55 +0000 (0:00:02.126) 0:04:05.054 **********
2026-04-06 04:30:33.218598 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:30:33.218606 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:30:33.218613 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:30:33.218620 | orchestrator |
2026-04-06 04:30:33.218627 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-04-06 04:30:33.218634 | orchestrator | Monday 06 April 2026 04:29:57 +0000 (0:00:01.514) 0:04:06.569 **********
2026-04-06 04:30:33.218640 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:30:33.218647 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:30:33.218654 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:30:33.218661 | orchestrator |
2026-04-06 04:30:33.218667 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-04-06 04:30:33.218674 | orchestrator | Monday 06 April 2026 04:29:58 +0000 (0:00:01.507) 0:04:08.076 **********
2026-04-06 04:30:33.218682 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 04:30:33.218689 | orchestrator |
2026-04-06 04:30:33.218697 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-04-06 04:30:33.218710 | orchestrator | Monday 06 April 2026 04:30:01 +0000 (0:00:02.151) 0:04:10.228 **********
2026-04-06 04:30:33.218716 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-06 04:30:33.218723 | orchestrator |
2026-04-06 04:30:33.218730 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-04-06 04:30:33.218737 | orchestrator | Monday 06 April 2026 04:30:03 +0000 (0:00:02.065) 0:04:12.294 **********
2026-04-06 04:30:33.218744 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 04:30:33.218752 | orchestrator |
2026-04-06 04:30:33.218758 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-04-06 04:30:33.218765 | orchestrator | Monday 06 April 2026 04:30:05 +0000 (0:00:02.041) 0:04:14.336 **********
2026-04-06 04:30:33.218772 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:30:33.218778 | orchestrator |
2026-04-06 04:30:33.218785 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-04-06 04:30:33.218791 | orchestrator | Monday 06 April 2026 04:30:06 +0000 (0:00:01.169) 0:04:15.506 **********
2026-04-06 04:30:33.218797 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 04:30:33.218803 | orchestrator |
2026-04-06 04:30:33.218809 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-04-06 04:30:33.218816 | orchestrator | Monday 06 April 2026 04:30:08 +0000 (0:00:02.266) 0:04:17.772 **********
2026-04-06 04:30:33.218823 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 04:30:33.218829 | orchestrator |
2026-04-06 04:30:33.218836 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-04-06 04:30:33.218842 | orchestrator | Monday 06 April 2026 04:30:10 +0000 (0:00:02.309) 0:04:20.081 **********
2026-04-06 04:30:33.218848 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 04:30:33.218855 | orchestrator |
2026-04-06 04:30:33.218862 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-04-06 04:30:33.218868 | orchestrator | Monday 06 April 2026 04:30:12 +0000 (0:00:01.254) 0:04:21.336 **********
2026-04-06 04:30:33.218875 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 04:30:33.218881 | orchestrator |
2026-04-06 04:30:33.218887 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-04-06 04:30:33.218894 | orchestrator | Monday 06 April 2026 04:30:13 +0000 (0:00:01.740) 0:04:23.077 **********
2026-04-06 04:30:33.218900 | orchestrator | ok: [testbed-node-0 -> localhost] => {
2026-04-06 04:30:33.218907 | orchestrator |     "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n"
2026-04-06 04:30:33.218915 | orchestrator | }
2026-04-06 04:30:33.218922 | orchestrator |
2026-04-06 04:30:33.218928 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-04-06 04:30:33.218935 | orchestrator | Monday 06 April 2026 04:30:15 +0000 (0:00:01.247) 0:04:24.325 **********
2026-04-06 04:30:33.218942 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:30:33.218949 | orchestrator |
2026-04-06 04:30:33.218956 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-04-06 04:30:33.218963 | orchestrator | Monday 06 April 2026 04:30:16 +0000 (0:00:01.237) 0:04:25.562 **********
2026-04-06 04:30:33.218969 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-04-06 04:30:33.218975 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-04-06 04:30:33.218981 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-04-06 04:30:33.218987 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-04-06 04:30:33.218994 | orchestrator |
2026-04-06 04:30:33.219001 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-04-06 04:30:33.219007 | orchestrator | Monday 06 April 2026 04:30:22 +0000 (0:00:06.394) 0:04:31.956 **********
2026-04-06 04:30:33.219014 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 04:30:33.219021 | orchestrator |
2026-04-06 04:30:33.219027 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-04-06 04:30:33.219051 | orchestrator | Monday 06 April 2026 04:30:25 +0000 (0:00:02.627) 0:04:34.584 **********
2026-04-06 04:30:33.219058 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-06 04:30:33.219064 | orchestrator |
2026-04-06 04:30:33.219070 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-04-06 04:30:33.219077 | orchestrator | Monday 06 April 2026 04:30:28 +0000 (0:00:03.009) 0:04:37.593 **********
2026-04-06 04:30:33.219084 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-06 04:30:33.219090 | orchestrator |
2026-04-06 04:30:33.219096 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-04-06 04:30:33.219106 | orchestrator | Monday 06 April 2026 04:30:32 +0000 (0:00:04.485) 0:04:42.079 **********
2026-04-06 04:30:33.219112 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:30:33.219117 | orchestrator |
2026-04-06 04:30:33.219127 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-04-06 04:31:06.856949 | orchestrator | Monday 06 April 2026 04:30:34 +0000 (0:00:01.193) 0:04:43.272 **********
2026-04-06 04:31:06.857124 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-04-06 04:31:06.857152 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-04-06 04:31:06.857173 | orchestrator |
2026-04-06 04:31:06.857194 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-04-06 04:31:06.857213 | orchestrator | Monday 06 April 2026 04:30:37 +0000 (0:00:03.197) 0:04:46.470 **********
2026-04-06 04:31:06.857231 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:31:06.857252 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:31:06.857271 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:31:06.857283 | orchestrator |
2026-04-06 04:31:06.857294 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-04-06 04:31:06.857306 | orchestrator | Monday 06 April 2026 04:30:38 +0000 (0:00:01.550) 0:04:48.021 **********
2026-04-06 04:31:06.857317 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:31:06.857329 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:31:06.857340 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:31:06.857351 | orchestrator |
2026-04-06 04:31:06.857362 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-04-06 04:31:06.857373 | orchestrator |
2026-04-06 04:31:06.857384 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-04-06 04:31:06.857395 | orchestrator | Monday 06 April 2026 04:30:41 +0000 (0:00:02.523) 0:04:50.545 **********
2026-04-06 04:31:06.857406 | orchestrator | ok: [testbed-manager]
2026-04-06 04:31:06.857417 | orchestrator |
2026-04-06 04:31:06.857428 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-04-06 04:31:06.857439 | orchestrator | Monday 06 April 2026 04:30:42 +0000 (0:00:01.171) 0:04:51.716 **********
2026-04-06 04:31:06.857450 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-04-06 04:31:06.857462 | orchestrator |
2026-04-06 04:31:06.857474 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-04-06 04:31:06.857486 | orchestrator | Monday 06 April 2026 04:30:44 +0000 (0:00:01.557) 0:04:53.274 **********
2026-04-06 04:31:06.857499 | orchestrator | ok: [testbed-manager]
2026-04-06 04:31:06.857513 | orchestrator |
2026-04-06 04:31:06.857526 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-04-06 04:31:06.857539 | orchestrator |
2026-04-06 04:31:06.857553 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-04-06 04:31:06.857566 | orchestrator | Monday 06 April 2026 04:30:49 +0000 (0:00:05.145) 0:04:58.419 **********
2026-04-06 04:31:06.857580 | orchestrator | ok: [testbed-node-3]
2026-04-06 04:31:06.857593 | orchestrator | ok: [testbed-node-4]
2026-04-06 04:31:06.857605 | orchestrator | ok: [testbed-node-5]
2026-04-06 04:31:06.857618 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:31:06.857631 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:31:06.857644 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:31:06.857683 | orchestrator |
2026-04-06 04:31:06.857696 | orchestrator | TASK [Manage labels] ***********************************************************
2026-04-06 04:31:06.857710 | orchestrator | Monday 06 April 2026 04:30:51 +0000 (0:00:01.900) 0:05:00.319 **********
2026-04-06 04:31:06.857724 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-06 04:31:06.857736 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-06 04:31:06.857749 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-06 04:31:06.857762 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-06 04:31:06.857775 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-06 04:31:06.857788 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-06 04:31:06.857801 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-06 04:31:06.857815 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-06 04:31:06.857828 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-06 04:31:06.857840 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-06 04:31:06.857852 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-06 04:31:06.857862 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-06 04:31:06.857873 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-06 04:31:06.857884 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-06 04:31:06.857895 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-06 04:31:06.857906 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-06 04:31:06.857917 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-06 04:31:06.857928 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-06 04:31:06.857938 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-06 04:31:06.857950 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-06 04:31:06.857961 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-06 04:31:06.857990 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-06 04:31:06.858002 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-06 04:31:06.858013 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-06 04:31:06.858169 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-06 04:31:06.858190 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-06 04:31:06.858210 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-06 04:31:06.858269 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-06 04:31:06.858283 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-06 04:31:06.858294 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-06 04:31:06.858305 | orchestrator |
2026-04-06 04:31:06.858316 | orchestrator | TASK [Manage annotations] ******************************************************
2026-04-06 04:31:06.858327 | orchestrator | Monday 06 April 2026 04:31:02 +0000 (0:00:11.008) 0:05:11.328 **********
2026-04-06 04:31:06.858350 | orchestrator | skipping: [testbed-node-3]
2026-04-06 04:31:06.858361 | orchestrator | skipping: [testbed-node-4]
2026-04-06 04:31:06.858372 | orchestrator | skipping: [testbed-node-5]
2026-04-06 04:31:06.858383 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:31:06.858394 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:31:06.858404 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:31:06.858415 | orchestrator |
2026-04-06 04:31:06.858441 | orchestrator | TASK [Manage taints] ***********************************************************
2026-04-06 04:31:06.858453 | orchestrator | Monday 06 April 2026 04:31:04 +0000 (0:00:01.854) 0:05:13.182 **********
2026-04-06 04:31:06.858464 | orchestrator | skipping: [testbed-node-3]
2026-04-06 04:31:06.858475 | orchestrator | skipping: [testbed-node-4]
2026-04-06 04:31:06.858486 | orchestrator | skipping: [testbed-node-5]
2026-04-06 04:31:06.858496 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:31:06.858507 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:31:06.858518 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:31:06.858529 | orchestrator |
2026-04-06 04:31:06.858540 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 04:31:06.858551 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 04:31:06.858565 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-06 04:31:06.858576 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-06 04:31:06.858587 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-06 04:31:06.858598 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-06 04:31:06.858609 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-06 04:31:06.858620 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-06 04:31:06.858631 | orchestrator |
2026-04-06 04:31:06.858642 | orchestrator |
2026-04-06 04:31:06.858653 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 04:31:06.858664 | orchestrator | Monday 06 April 2026 04:31:06 +0000 (0:00:02.806) 0:05:15.989 **********
2026-04-06 04:31:06.858681 | orchestrator | ===============================================================================
2026-04-06 04:31:06.858697 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 33.82s
2026-04-06 04:31:06.858723 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 27.89s
2026-04-06 04:31:06.858739 | orchestrator | Manage labels ---------------------------------------------------------- 11.01s
2026-04-06 04:31:06.858755 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.43s
2026-04-06 04:31:06.858770 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 6.39s
2026-04-06 04:31:06.858787 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 5.60s
2026-04-06 04:31:06.858804 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.15s
2026-04-06 04:31:06.858820 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.67s
2026-04-06 04:31:06.858837 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.49s
2026-04-06 04:31:06.858849 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 3.82s
2026-04-06 04:31:06.858874 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 3.33s
2026-04-06 04:31:06.858884 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 3.26s
2026-04-06 04:31:06.858905 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 3.20s
2026-04-06 04:31:07.236007 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 3.01s
2026-04-06 04:31:07.236190 | orchestrator | k3s_prereq : Add br_netfilter to /etc/modules-load.d/ ------------------- 3.00s
2026-04-06 04:31:07.236207 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.99s
2026-04-06 04:31:07.236219 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.93s
2026-04-06 04:31:07.236231 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.90s
2026-04-06 04:31:07.236242 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.81s
2026-04-06 04:31:07.236253 | orchestrator | kubectl : Install required packages ------------------------------------- 2.81s
2026-04-06 04:31:07.460911 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-04-06 04:31:07.461016 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh
2026-04-06 04:31:07.468637 | orchestrator | + set -e
2026-04-06 04:31:07.468720 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-06 04:31:07.468728 | orchestrator | ++ export INTERACTIVE=false
2026-04-06 04:31:07.468735 | orchestrator | ++ INTERACTIVE=false
2026-04-06 04:31:07.468740 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-06 04:31:07.468745 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-06 04:31:07.468750 | orchestrator | + osism apply openstackclient
2026-04-06 04:31:18.948770 | orchestrator | 2026-04-06 04:31:18 | INFO  | Prepare task for execution of openstackclient.
2026-04-06 04:31:19.031856 | orchestrator | 2026-04-06 04:31:19 | INFO  | Task 59872024-98a3-4e50-a3c8-6dbfcc99e93a (openstackclient) was prepared for execution.
2026-04-06 04:31:19.031936 | orchestrator | 2026-04-06 04:31:19 | INFO  | It takes a moment until task 59872024-98a3-4e50-a3c8-6dbfcc99e93a (openstackclient) has been started and output is visible here.
2026-04-06 04:31:56.098584 | orchestrator |
2026-04-06 04:31:56.098667 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-04-06 04:31:56.098677 | orchestrator |
2026-04-06 04:31:56.098684 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-04-06 04:31:56.098691 | orchestrator | Monday 06 April 2026 04:31:25 +0000 (0:00:02.203) 0:00:02.203 **********
2026-04-06 04:31:56.098700 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-04-06 04:31:56.098707 | orchestrator |
2026-04-06 04:31:56.098713 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-04-06 04:31:56.098720 | orchestrator | Monday 06 April 2026 04:31:27 +0000 (0:00:02.104) 0:00:04.307 **********
2026-04-06 04:31:56.098727 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-04-06 04:31:56.098737 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data)
2026-04-06 04:31:56.098745 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-04-06 04:31:56.098752 | orchestrator |
2026-04-06 04:31:56.098759 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-04-06 04:31:56.098766 | orchestrator | Monday 06 April 2026 04:31:29 +0000 (0:00:02.806) 0:00:07.114 **********
2026-04-06 04:31:56.098773 | orchestrator | changed: [testbed-manager]
2026-04-06 04:31:56.098779 | orchestrator |
2026-04-06 04:31:56.098786 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-04-06 04:31:56.098793 | orchestrator | Monday 06 April 2026 04:31:32 +0000 (0:00:02.397) 0:00:09.512 **********
2026-04-06 04:31:56.098800 | orchestrator | ok: [testbed-manager]
2026-04-06 04:31:56.098807 | orchestrator |
2026-04-06 04:31:56.098814 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-04-06 04:31:56.098843 | orchestrator | Monday 06 April 2026 04:31:34 +0000 (0:00:02.163) 0:00:11.676 **********
2026-04-06 04:31:56.098849 | orchestrator | ok: [testbed-manager]
2026-04-06 04:31:56.098855 | orchestrator |
2026-04-06 04:31:56.098862 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-04-06 04:31:56.098869 | orchestrator | Monday 06 April 2026 04:31:36 +0000 (0:00:02.238) 0:00:13.914 **********
2026-04-06 04:31:56.098875 | orchestrator | ok: [testbed-manager]
2026-04-06 04:31:56.098881 | orchestrator |
2026-04-06 04:31:56.098888 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-04-06 04:31:56.098894 | orchestrator | Monday 06 April 2026 04:31:38 +0000 (0:00:02.104) 0:00:16.019 **********
2026-04-06 04:31:56.098900 | orchestrator | changed: [testbed-manager]
2026-04-06 04:31:56.098906 | orchestrator |
2026-04-06 04:31:56.098913 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-04-06 04:31:56.098920 | orchestrator | Monday 06 April 2026 04:31:50 +0000 (0:00:11.247) 0:00:27.266 **********
2026-04-06 04:31:56.098926 | orchestrator | changed: [testbed-manager]
2026-04-06 04:31:56.098931 | orchestrator |
2026-04-06 04:31:56.098935 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-04-06 04:31:56.098940 | orchestrator | Monday 06 April 2026 04:31:52 +0000 (0:00:01.923) 0:00:29.190 **********
2026-04-06 04:31:56.098946 | orchestrator | changed: [testbed-manager]
2026-04-06 04:31:56.098951 | orchestrator |
2026-04-06 04:31:56.098958 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-04-06 04:31:56.098964 | orchestrator | Monday 06 April 2026 04:31:53 +0000 (0:00:01.616) 0:00:30.807 **********
2026-04-06 04:31:56.098970 | orchestrator | ok: [testbed-manager]
2026-04-06 04:31:56.098976 | orchestrator |
2026-04-06 04:31:56.098982 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 04:31:56.098988 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 04:31:56.098995 | orchestrator |
2026-04-06 04:31:56.099000 | orchestrator |
2026-04-06 04:31:56.099007 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 04:31:56.099013 | orchestrator | Monday 06 April 2026 04:31:55 +0000 (0:00:01.992) 0:00:32.800 **********
2026-04-06 04:31:56.099019 | orchestrator | ===============================================================================
2026-04-06 04:31:56.099026 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 11.25s
2026-04-06 04:31:56.099032 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.81s
2026-04-06 04:31:56.099038 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.40s
2026-04-06 04:31:56.099045 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.24s
2026-04-06 04:31:56.099051 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 2.16s
2026-04-06 04:31:56.099057 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 2.10s
2026-04-06 04:31:56.099063 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 2.10s
2026-04-06 04:31:56.099068 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.99s
2026-04-06 04:31:56.099118 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.92s
2026-04-06 04:31:56.099125 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.62s
2026-04-06 04:31:56.332937 | orchestrator | + osism apply -a upgrade common
2026-04-06 04:31:57.851695 | orchestrator | 2026-04-06 04:31:57 | INFO  | Prepare task for execution of common.
2026-04-06 04:31:57.930358 | orchestrator | 2026-04-06 04:31:57 | INFO  | Task eb8f8457-8393-4265-a7ca-841298154308 (common) was prepared for execution.
2026-04-06 04:31:57.930449 | orchestrator | 2026-04-06 04:31:57 | INFO  | It takes a moment until task eb8f8457-8393-4265-a7ca-841298154308 (common) has been started and output is visible here.
2026-04-06 04:32:17.883231 | orchestrator |
2026-04-06 04:32:17.883340 | orchestrator | PLAY [Apply role common] *******************************************************
2026-04-06 04:32:17.883356 | orchestrator |
2026-04-06 04:32:17.883368 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-06 04:32:17.883379 | orchestrator | Monday 06 April 2026 04:32:03 +0000 (0:00:02.327) 0:00:02.327 **********
2026-04-06 04:32:17.883391 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 04:32:17.883403 | orchestrator |
2026-04-06 04:32:17.883414 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-04-06 04:32:17.883425 | orchestrator | Monday 06 April 2026 04:32:07 +0000 (0:00:03.422) 0:00:05.750 **********
2026-04-06 04:32:17.883445 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-06 04:32:17.883464 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-06 04:32:17.883495 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-06 04:32:17.883514 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-06 04:32:17.883532 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-06 04:32:17.883549 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-06 04:32:17.883567 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-06 04:32:17.883585 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-06 04:32:17.883604 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-06 04:32:17.883622 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-06 04:32:17.883640 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-06 04:32:17.883660 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-06 04:32:17.883681 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-06 04:32:17.883701 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-06 04:32:17.883722 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-06 04:32:17.883737 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-06 04:32:17.883748 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-06 04:32:17.883759 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-06 04:32:17.883770 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-06 04:32:17.883800 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-06 04:32:17.883812 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-06 04:32:17.883823 | orchestrator |
2026-04-06 04:32:17.883834 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-06 04:32:17.883845 | orchestrator | Monday 06 April 2026 04:32:12 +0000 (0:00:05.176) 0:00:10.927 **********
2026-04-06 04:32:17.883861 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 04:32:17.883874 | orchestrator |
2026-04-06 04:32:17.883885 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-04-06 04:32:17.883895 | orchestrator | Monday 06 April 2026 04:32:15 +0000 (0:00:02.879) 0:00:13.806 **********
2026-04-06 04:32:17.883910 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-06 04:32:17.883955 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-06 04:32:17.883989 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-06 04:32:17.884002 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-06 04:32:17.884013 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-06 04:32:17.884025 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:32:17.884042 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:32:17.884054 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:32:17.884073 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:32:17.884122 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:32:22.790792 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:32:22.790890 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:32:22.790900 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:32:22.790907 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:32:22.790915 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:32:22.790945 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 04:32:22.790955 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 04:32:22.790962 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:32:22.790985 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-06 04:32:22.790993 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:32:22.791000 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:32:22.791007 | orchestrator | 2026-04-06 04:32:22.791198 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-04-06 04:32:22.791209 | orchestrator | Monday 06 April 2026 04:32:22 +0000 (0:00:06.840) 0:00:20.647 ********** 2026-04-06 04:32:22.791218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 04:32:22.791234 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 04:32:22.791242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:22.791250 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:22.791265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 04:32:23.789143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:23.789235 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:23.789245 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:32:23.789270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 04:32:23.789297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:23.789305 | orchestrator | skipping: [testbed-manager] 2026-04-06 04:32:23.789313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:23.789320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:23.789328 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:32:23.789336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 04:32:23.789362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:23.789370 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:32:23.789377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 04:32:23.789388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:23.789401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 04:32:23.789408 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:23.789415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 
04:32:23.789423 | orchestrator | skipping: [testbed-node-3] 2026-04-06 04:32:23.789430 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:23.789441 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:26.454949 | orchestrator | skipping: [testbed-node-5] 2026-04-06 04:32:26.455023 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:26.455031 | orchestrator | skipping: [testbed-node-4] 2026-04-06 04:32:26.455035 | orchestrator | 2026-04-06 04:32:26.455040 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 
2026-04-06 04:32:26.455063 | orchestrator | Monday 06 April 2026 04:32:25 +0000 (0:00:02.894) 0:00:23.541 ********** 2026-04-06 04:32:26.455070 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 04:32:26.455076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 04:32:26.455124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:26.455130 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 04:32:26.455135 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:26.455139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:26.455143 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:32:26.455156 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:26.455168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:26.455172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 04:32:26.455176 | orchestrator | skipping: [testbed-manager] 2026-04-06 04:32:26.455180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:26.455184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:26.455188 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:32:26.455192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:26.455197 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:32:26.455201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 04:32:26.455212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 04:32:39.544118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 04:32:39.544226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:39.544239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:39.544248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:39.544257 | orchestrator | skipping: [testbed-node-5] 2026-04-06 04:32:39.544266 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:32:39.544272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:32:39.544279 | orchestrator | skipping: [testbed-node-3]
2026-04-06 04:32:39.544286 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:32:39.544309 | orchestrator | skipping: [testbed-node-4]
2026-04-06 04:32:39.544316 | orchestrator |
2026-04-06 04:32:39.544323 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] *****************
2026-04-06 04:32:39.544331 | orchestrator | Monday 06 April 2026 04:32:28 +0000 (0:00:03.268) 0:00:26.810 **********
2026-04-06 04:32:39.544349 | orchestrator | skipping: [testbed-manager]
2026-04-06 04:32:39.544356 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:32:39.544363 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:32:39.544369 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:32:39.544375 | orchestrator | skipping: [testbed-node-3]
2026-04-06 04:32:39.544381 | orchestrator | skipping: [testbed-node-4]
2026-04-06 04:32:39.544387 | orchestrator | skipping: [testbed-node-5]
2026-04-06 04:32:39.544394 | orchestrator |
2026-04-06 04:32:39.544400 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-04-06 04:32:39.544406 | orchestrator | Monday 06 April 2026 04:32:30 +0000 (0:00:02.239) 0:00:29.050 **********
2026-04-06 04:32:39.544413 | orchestrator | skipping: [testbed-manager]
2026-04-06 04:32:39.544419 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:32:39.544425 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:32:39.544431 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:32:39.544437 | orchestrator | skipping: [testbed-node-3]
2026-04-06 04:32:39.544444 | orchestrator | skipping: [testbed-node-4]
2026-04-06 04:32:39.544450 | orchestrator | skipping: [testbed-node-5]
2026-04-06 04:32:39.544456 | orchestrator |
2026-04-06 04:32:39.544466 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-04-06 04:32:39.544472 | orchestrator | Monday 06 April 2026 04:32:32 +0000 (0:00:02.036) 0:00:31.086 **********
2026-04-06 04:32:39.544479 | orchestrator | skipping: [testbed-manager]
2026-04-06 04:32:39.544485 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:32:39.544491 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:32:39.544497 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:32:39.544503 | orchestrator | skipping: [testbed-node-3]
2026-04-06 04:32:39.544509 | orchestrator | skipping: [testbed-node-4]
2026-04-06 04:32:39.544516 | orchestrator | skipping: [testbed-node-5]
2026-04-06 04:32:39.544522 | orchestrator |
2026-04-06 04:32:39.544528 | orchestrator | TASK [common : Copying over kolla.target] **************************************
2026-04-06 04:32:39.544534 | orchestrator | Monday 06 April 2026 04:32:34 +0000 (0:00:02.207) 0:00:33.293 **********
2026-04-06 04:32:39.544542 | orchestrator | changed: [testbed-manager]
2026-04-06 04:32:39.544553 | orchestrator | changed: [testbed-node-0]
2026-04-06 04:32:39.544564 | orchestrator | changed: [testbed-node-1]
2026-04-06 04:32:39.544573 | orchestrator | changed: [testbed-node-2]
2026-04-06 04:32:39.544584 | orchestrator | changed: [testbed-node-3]
2026-04-06 04:32:39.544594 | orchestrator | changed: [testbed-node-4]
2026-04-06 04:32:39.544603 | orchestrator | changed: [testbed-node-5]
2026-04-06 04:32:39.544614 | orchestrator |
2026-04-06 04:32:39.544624 | orchestrator | TASK [common : Copying over config.json
files for services] ******************** 2026-04-06 04:32:39.544635 | orchestrator | Monday 06 April 2026 04:32:38 +0000 (0:00:03.106) 0:00:36.400 ********** 2026-04-06 04:32:39.544648 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 04:32:39.544661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 04:32:39.544680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 04:32:39.544688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 04:32:39.544705 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 04:32:43.936473 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:32:43.936562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:32:43.936573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:32:43.936602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:32:43.936611 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:32:43.936621 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:32:43.936631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:32:43.936654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:32:43.936667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:32:43.936676 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:32:43.936684 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 04:32:43.936702 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 04:32:43.936710 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:32:43.936718 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:32:43.936727 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:32:43.936740 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:33:06.763815 | orchestrator |
2026-04-06 04:33:06.763939 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-04-06 04:33:06.763958 | orchestrator | Monday 06 April 2026 04:32:45 +0000 (0:00:06.999) 0:00:43.400 **********
2026-04-06 04:33:06.763969 | orchestrator | [WARNING]: Skipped
2026-04-06 04:33:06.763982 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-04-06 04:33:06.763994 | orchestrator | to this access issue:
2026-04-06 04:33:06.764005 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-04-06 04:33:06.764017 | orchestrator | directory
2026-04-06 04:33:06.764028 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-06 04:33:06.764040 | orchestrator |
2026-04-06 04:33:06.764051 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-04-06 04:33:06.764062 | orchestrator | Monday 06 April 2026 04:32:47 +0000 (0:00:02.466) 0:00:45.867 **********
2026-04-06 04:33:06.764073 | orchestrator | [WARNING]: Skipped
2026-04-06 04:33:06.764143 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-04-06 04:33:06.764170 | orchestrator | to this access issue:
2026-04-06 04:33:06.764188 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-04-06 04:33:06.764207 | orchestrator | directory
2026-04-06 04:33:06.764223 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-06 04:33:06.764240 | orchestrator |
2026-04-06 04:33:06.764257 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-04-06 04:33:06.764275 | orchestrator | Monday 06 April 2026 04:32:49 +0000 (0:00:02.183) 0:00:48.051 **********
2026-04-06 04:33:06.764291 | orchestrator | [WARNING]: Skipped
2026-04-06 04:33:06.764307 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-04-06 04:33:06.764323 | orchestrator | to this access issue:
2026-04-06 04:33:06.764341 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-04-06 04:33:06.764357 | orchestrator | directory
2026-04-06 04:33:06.764374 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-06 04:33:06.764391 | orchestrator |
2026-04-06 04:33:06.764410 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-04-06 04:33:06.764428 | orchestrator | Monday 06 April 2026 04:32:51 +0000 (0:00:02.307) 0:00:50.358 **********
2026-04-06 04:33:06.764448 | orchestrator | [WARNING]: Skipped
2026-04-06 04:33:06.764464 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-04-06 04:33:06.764482 | orchestrator | to this access issue:
2026-04-06 04:33:06.764500 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-04-06 04:33:06.764542 | orchestrator | directory
2026-04-06 04:33:06.764575 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-06 04:33:06.764595 | orchestrator |
2026-04-06 04:33:06.764615 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-04-06 04:33:06.764630 | orchestrator | Monday 06 April 2026 04:32:53 +0000 (0:00:02.014) 0:00:52.373 **********
2026-04-06 04:33:06.764643 | orchestrator | changed: [testbed-manager]
2026-04-06 04:33:06.764656 | orchestrator | changed: [testbed-node-0]
2026-04-06 04:33:06.764669 | orchestrator | changed: [testbed-node-1]
2026-04-06 04:33:06.764682 | orchestrator | changed: [testbed-node-2]
2026-04-06 04:33:06.764695 | orchestrator | changed: [testbed-node-3]
2026-04-06 04:33:06.764708 | orchestrator | changed: [testbed-node-4]
2026-04-06 04:33:06.764721 | orchestrator | changed: [testbed-node-5]
2026-04-06 04:33:06.764733 | orchestrator |
2026-04-06 04:33:06.764744 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-04-06 04:33:06.764755 | orchestrator | Monday 06 April 2026 04:32:58 +0000 (0:00:04.682) 0:00:57.055 **********
2026-04-06 04:33:06.764766 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-06 04:33:06.764778 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-06 04:33:06.764788 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-06 04:33:06.764799 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-06 04:33:06.764810 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-06 04:33:06.764821 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-06 04:33:06.764831 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-06 04:33:06.764842 | orchestrator |
2026-04-06 04:33:06.764853 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-04-06 04:33:06.764864 | orchestrator | Monday 06 April 2026 04:33:02 +0000 (0:00:03.285) 0:01:00.995 **********
2026-04-06 04:33:06.764875 | orchestrator | ok: [testbed-manager]
2026-04-06 04:33:06.764898 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:33:06.764910 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:33:06.764920 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:33:06.764931 | orchestrator | ok: [testbed-node-3] 2026-04-06 04:33:06.764942 | orchestrator | ok: [testbed-node-4] 2026-04-06 04:33:06.764953 | orchestrator | ok: [testbed-node-5] 2026-04-06 04:33:06.764963 | orchestrator | 2026-04-06 04:33:06.764974 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-06 04:33:06.764985 | orchestrator | Monday 06 April 2026 04:33:05 +0000 (0:00:03.285) 0:01:04.281 ********** 2026-04-06 04:33:06.765038 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 04:33:06.765055 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:33:06.765068 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 04:33:06.765079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:33:06.765091 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 04:33:06.765102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:33:06.765148 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:33:06.765169 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 04:33:14.148614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:33:14.148737 | orchestrator | ok: [testbed-node-3] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 04:33:14.148755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:33:14.148769 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:33:14.148783 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:33:14.148815 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:33:14.148827 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:33:14.148844 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 04:33:14.148873 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:33:14.148886 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 04:33:14.148897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:33:14.148908 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:33:14.148920 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:33:14.148940 | orchestrator |
2026-04-06 04:33:14.148953 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-04-06 04:33:14.148965 | orchestrator | Monday 06 April 2026 04:33:08 +0000 (0:00:02.992) 0:01:07.274 **********
2026-04-06 04:33:14.148976 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-06 04:33:14.148987 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-06 04:33:14.148998 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-06 04:33:14.149009 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-06 04:33:14.149019 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-06 04:33:14.149030 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-06 04:33:14.149041 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-06 04:33:14.149052 | orchestrator |
2026-04-06 04:33:14.149062 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-04-06 04:33:14.149073 | orchestrator | Monday 06 April 2026 04:33:12 +0000 (0:00:03.504) 0:01:10.778 **********
2026-04-06 04:33:14.149084 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-06 04:33:14.149095 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-06 04:33:14.149105 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-06 04:33:14.149160 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-06 04:33:14.149211 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-06 04:33:19.082853 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-06 04:33:19.082956 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-06 04:33:19.082972 | orchestrator |
2026-04-06 04:33:19.082988 | orchestrator | TASK [service-check-containers : common | Check containers] ********************
2026-04-06 04:33:19.083001 | orchestrator | Monday 06 April 2026 04:33:16 +0000 (0:00:03.808) 0:01:14.586 **********
2026-04-06 04:33:19.083015 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-06 04:33:19.083030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY':
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 04:33:19.083041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 04:33:19.083078 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 04:33:19.083091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': 
{}}}) 2026-04-06 04:33:19.083102 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:33:19.083198 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:33:19.083212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-06 04:33:19.083224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:33:19.083235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:33:19.083257 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:33:19.083272 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:33:19.083286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:33:19.083301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:33:19.083322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:33:24.269900 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 04:33:24.270005 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-06 04:33:24.270089 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 04:33:24.270105 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:33:24.270147 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:33:24.270161 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:33:24.270174 | orchestrator |
2026-04-06 04:33:24.270196 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] ***
2026-04-06 04:33:24.270209 | orchestrator | Monday 06 April 2026 04:33:21 +0000 (0:00:05.711) 0:01:20.298 **********
2026-04-06 04:33:24.270221 | orchestrator | changed: [testbed-manager] => {
2026-04-06 04:33:24.270233 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 04:33:24.270244 | orchestrator | }
2026-04-06 04:33:24.270255 | orchestrator | changed: [testbed-node-0] => {
2026-04-06 04:33:24.270266 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 04:33:24.270277 | orchestrator | }
2026-04-06 04:33:24.270288 | orchestrator | changed: [testbed-node-1] => {
2026-04-06 04:33:24.270316 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 04:33:24.270327 | orchestrator | }
2026-04-06 04:33:24.270338 | orchestrator | changed: [testbed-node-2] => {
2026-04-06 04:33:24.270348 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 04:33:24.270359 | orchestrator | }
2026-04-06 04:33:24.270375 | orchestrator | changed: [testbed-node-3] => {
2026-04-06 04:33:24.270386 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 04:33:24.270396 | orchestrator | }
2026-04-06 04:33:24.270407 | orchestrator | changed: [testbed-node-4] => {
2026-04-06 04:33:24.270417 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 04:33:24.270428 | orchestrator | }
2026-04-06 04:33:24.270442 | orchestrator | changed: [testbed-node-5] => {
2026-04-06 04:33:24.270454 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 04:33:24.270468 | orchestrator | }
2026-04-06 04:33:24.270479 | orchestrator |
2026-04-06 04:33:24.270509 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-06 04:33:24.270523 | orchestrator | Monday 06 April 2026 04:33:23 +0000 (0:00:01.996) 0:01:22.294 **********
2026-04-06 04:33:24.270545 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-06 04:33:24.270559 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR':
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:33:24.270574 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:33:24.270587 | orchestrator | skipping: [testbed-manager] 2026-04-06 04:33:24.270600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 04:33:24.270614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:33:24.270627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:33:24.270640 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:33:24.270658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 04:33:24.270687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:33:29.764760 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:33:29.764913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 04:33:29.764939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:33:29.764954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:33:29.764967 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:33:29.764980 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:33:29.764992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 04:33:29.765022 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:33:29.765063 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:33:29.765075 | orchestrator | skipping: [testbed-node-3] 2026-04-06 04:33:29.765108 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 04:33:29.765154 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:33:29.765166 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:33:29.765178 | orchestrator | skipping: [testbed-node-4] 2026-04-06 04:33:29.765189 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-06 04:33:29.765204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:33:29.765224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:33:29.765253 | orchestrator | skipping: [testbed-node-5] 2026-04-06 04:33:29.765274 | orchestrator | 2026-04-06 04:33:29.765295 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-04-06 04:33:29.765315 | orchestrator | Monday 06 April 2026 04:33:27 +0000 (0:00:03.171) 0:01:25.466 
**********
2026-04-06 04:33:29.765335 | orchestrator |
2026-04-06 04:33:29.765364 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-06 04:33:29.765383 | orchestrator | Monday 06 April 2026 04:33:27 +0000 (0:00:00.466) 0:01:25.933 **********
2026-04-06 04:33:29.765397 | orchestrator |
2026-04-06 04:33:29.765410 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-06 04:33:29.765423 | orchestrator | Monday 06 April 2026 04:33:27 +0000 (0:00:00.455) 0:01:26.388 **********
2026-04-06 04:33:29.765436 | orchestrator |
2026-04-06 04:33:29.765449 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-06 04:33:29.765462 | orchestrator | Monday 06 April 2026 04:33:28 +0000 (0:00:00.434) 0:01:26.823 **********
2026-04-06 04:33:29.765474 | orchestrator |
2026-04-06 04:33:29.765487 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-06 04:33:29.765499 | orchestrator | Monday 06 April 2026 04:33:28 +0000 (0:00:00.417) 0:01:27.240 **********
2026-04-06 04:33:29.765512 | orchestrator |
2026-04-06 04:33:29.765524 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-06 04:33:29.765536 | orchestrator | Monday 06 April 2026 04:33:29 +0000 (0:00:00.480) 0:01:27.720 **********
2026-04-06 04:33:29.765549 | orchestrator |
2026-04-06 04:33:29.765562 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-06 04:33:29.765585 | orchestrator | Monday 06 April 2026 04:33:29 +0000 (0:00:00.436) 0:01:28.157 **********
2026-04-06 04:36:10.654628 | orchestrator |
2026-04-06 04:36:10.654776 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-04-06 04:36:10.654804 | orchestrator | Monday 06 April 2026 04:33:30 +0000 (0:00:00.940) 0:01:29.098 **********
2026-04-06 04:36:10.654824 | orchestrator | changed: [testbed-manager]
2026-04-06 04:36:10.654846 | orchestrator | changed: [testbed-node-3]
2026-04-06 04:36:10.654865 | orchestrator | changed: [testbed-node-0]
2026-04-06 04:36:10.654883 | orchestrator | changed: [testbed-node-1]
2026-04-06 04:36:10.654897 | orchestrator | changed: [testbed-node-2]
2026-04-06 04:36:10.654907 | orchestrator | changed: [testbed-node-4]
2026-04-06 04:36:10.654918 | orchestrator | changed: [testbed-node-5]
2026-04-06 04:36:10.654929 | orchestrator |
2026-04-06 04:36:10.654941 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-04-06 04:36:10.654953 | orchestrator | Monday 06 April 2026 04:34:42 +0000 (0:01:11.852) 0:02:40.951 **********
2026-04-06 04:36:10.654964 | orchestrator | changed: [testbed-manager]
2026-04-06 04:36:10.654975 | orchestrator | changed: [testbed-node-3]
2026-04-06 04:36:10.654985 | orchestrator | changed: [testbed-node-1]
2026-04-06 04:36:10.654996 | orchestrator | changed: [testbed-node-2]
2026-04-06 04:36:10.655008 | orchestrator | changed: [testbed-node-0]
2026-04-06 04:36:10.655019 | orchestrator | changed: [testbed-node-4]
2026-04-06 04:36:10.655030 | orchestrator | changed: [testbed-node-5]
2026-04-06 04:36:10.655040 | orchestrator |
2026-04-06 04:36:10.655051 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-04-06 04:36:10.655062 | orchestrator | Monday 06 April 2026 04:35:47 +0000 (0:01:05.085) 0:03:46.037 **********
2026-04-06 04:36:10.655073 | orchestrator | ok: [testbed-manager]
2026-04-06 04:36:10.655085 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:36:10.655096 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:36:10.655106 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:36:10.655117 | orchestrator | ok: [testbed-node-3]
2026-04-06 04:36:10.655128 | orchestrator | ok: [testbed-node-4]
2026-04-06 04:36:10.655141 | orchestrator | ok: [testbed-node-5]
2026-04-06 04:36:10.655154 | orchestrator |
2026-04-06 04:36:10.655167 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-04-06 04:36:10.655240 | orchestrator | Monday 06 April 2026 04:35:51 +0000 (0:00:03.423) 0:03:49.460 **********
2026-04-06 04:36:10.655253 | orchestrator | changed: [testbed-manager]
2026-04-06 04:36:10.655266 | orchestrator | changed: [testbed-node-3]
2026-04-06 04:36:10.655279 | orchestrator | changed: [testbed-node-1]
2026-04-06 04:36:10.655291 | orchestrator | changed: [testbed-node-2]
2026-04-06 04:36:10.655304 | orchestrator | changed: [testbed-node-0]
2026-04-06 04:36:10.655317 | orchestrator | changed: [testbed-node-4]
2026-04-06 04:36:10.655329 | orchestrator | changed: [testbed-node-5]
2026-04-06 04:36:10.655341 | orchestrator |
2026-04-06 04:36:10.655354 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 04:36:10.655368 | orchestrator | testbed-manager : ok=22  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 04:36:10.655382 | orchestrator | testbed-node-0 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 04:36:10.655395 | orchestrator | testbed-node-1 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 04:36:10.655408 | orchestrator | testbed-node-2 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 04:36:10.655421 | orchestrator | testbed-node-3 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 04:36:10.655434 | orchestrator | testbed-node-4 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 04:36:10.655447 | orchestrator | testbed-node-5 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 04:36:10.655460 | orchestrator |
2026-04-06 04:36:10.655473 | orchestrator
| 2026-04-06 04:36:10.655486 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 04:36:10.655499 | orchestrator | Monday 06 April 2026 04:36:10 +0000 (0:00:19.113) 0:04:08.574 ********** 2026-04-06 04:36:10.655512 | orchestrator | =============================================================================== 2026-04-06 04:36:10.655539 | orchestrator | common : Restart fluentd container ------------------------------------- 71.85s 2026-04-06 04:36:10.655551 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 65.09s 2026-04-06 04:36:10.655562 | orchestrator | common : Restart cron container ---------------------------------------- 19.11s 2026-04-06 04:36:10.655573 | orchestrator | common : Copying over config.json files for services -------------------- 7.00s 2026-04-06 04:36:10.655583 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.84s 2026-04-06 04:36:10.655594 | orchestrator | service-check-containers : common | Check containers -------------------- 5.71s 2026-04-06 04:36:10.655605 | orchestrator | common : Ensuring config directories exist ------------------------------ 5.18s 2026-04-06 04:36:10.655615 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.68s 2026-04-06 04:36:10.655626 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.94s 2026-04-06 04:36:10.655637 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.81s 2026-04-06 04:36:10.655647 | orchestrator | common : Flush handlers ------------------------------------------------- 3.63s 2026-04-06 04:36:10.655658 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.50s 2026-04-06 04:36:10.655689 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.42s 2026-04-06 
04:36:10.655700 | orchestrator | common : include_tasks -------------------------------------------------- 3.42s 2026-04-06 04:36:10.655711 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.29s 2026-04-06 04:36:10.655730 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.27s 2026-04-06 04:36:10.655741 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.17s 2026-04-06 04:36:10.655752 | orchestrator | common : Copying over kolla.target -------------------------------------- 3.11s 2026-04-06 04:36:10.655763 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.99s 2026-04-06 04:36:10.655774 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.89s 2026-04-06 04:36:10.837839 | orchestrator | + osism apply -a upgrade loadbalancer 2026-04-06 04:36:12.229682 | orchestrator | 2026-04-06 04:36:12 | INFO  | Prepare task for execution of loadbalancer. 2026-04-06 04:36:12.297038 | orchestrator | 2026-04-06 04:36:12 | INFO  | Task 72faa81c-0560-4e9d-8d76-d4d9d7d3a6f6 (loadbalancer) was prepared for execution. 2026-04-06 04:36:12.297119 | orchestrator | 2026-04-06 04:36:12 | INFO  | It takes a moment until task 72faa81c-0560-4e9d-8d76-d4d9d7d3a6f6 (loadbalancer) has been started and output is visible here. 
2026-04-06 04:36:45.902084 | orchestrator |
2026-04-06 04:36:45.902257 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 04:36:45.902276 | orchestrator |
2026-04-06 04:36:45.902288 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 04:36:45.902300 | orchestrator | Monday 06 April 2026 04:36:17 +0000 (0:00:01.383) 0:00:01.383 **********
2026-04-06 04:36:45.902311 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:36:45.902323 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:36:45.902334 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:36:45.902345 | orchestrator |
2026-04-06 04:36:45.902356 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 04:36:45.902367 | orchestrator | Monday 06 April 2026 04:36:19 +0000 (0:00:01.904) 0:00:03.288 **********
2026-04-06 04:36:45.902379 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-04-06 04:36:45.902391 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-04-06 04:36:45.902401 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-04-06 04:36:45.902412 | orchestrator |
2026-04-06 04:36:45.902423 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-04-06 04:36:45.902434 | orchestrator |
2026-04-06 04:36:45.902445 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-06 04:36:45.902456 | orchestrator | Monday 06 April 2026 04:36:22 +0000 (0:00:03.261) 0:00:06.550 **********
2026-04-06 04:36:45.902467 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 04:36:45.902478 | orchestrator |
2026-04-06 04:36:45.902489 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter containers] ***
2026-04-06 04:36:45.902500 | orchestrator | Monday 06 April 2026 04:36:25 +0000 (0:00:02.846) 0:00:09.397 **********
2026-04-06 04:36:45.902511 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:36:45.902522 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:36:45.902533 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:36:45.902544 | orchestrator |
2026-04-06 04:36:45.902555 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] *********************
2026-04-06 04:36:45.902569 | orchestrator | Monday 06 April 2026 04:36:27 +0000 (0:00:02.288) 0:00:11.686 **********
2026-04-06 04:36:45.902582 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:36:45.902596 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:36:45.902608 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:36:45.902620 | orchestrator |
2026-04-06 04:36:45.902634 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-04-06 04:36:45.902646 | orchestrator | Monday 06 April 2026 04:36:29 +0000 (0:00:02.077) 0:00:13.764 **********
2026-04-06 04:36:45.902659 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:36:45.902671 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:36:45.902708 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:36:45.902721 | orchestrator |
2026-04-06 04:36:45.902734 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-06 04:36:45.902747 | orchestrator | Monday 06 April 2026 04:36:31 +0000 (0:00:01.667) 0:00:15.431 **********
2026-04-06 04:36:45.902763 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 04:36:45.902782 | orchestrator |
2026-04-06 04:36:45.902801 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-04-06 04:36:45.902818 | orchestrator | Monday 06 April 2026 04:36:32 +0000 (0:00:01.662) 0:00:17.093 **********
2026-04-06 04:36:45.902837 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:36:45.902857 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:36:45.902876 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:36:45.902895 | orchestrator |
2026-04-06 04:36:45.902914 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-04-06 04:36:45.902926 | orchestrator | Monday 06 April 2026 04:36:34 +0000 (0:00:01.679) 0:00:18.773 **********
2026-04-06 04:36:45.902937 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-06 04:36:45.902948 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-06 04:36:45.902959 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-06 04:36:45.902969 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-06 04:36:45.902980 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-06 04:36:45.903007 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-06 04:36:45.903019 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-06 04:36:45.903030 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-06 04:36:45.903041 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-06 04:36:45.903052 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-06 04:36:45.903063 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-06 04:36:45.903073 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-06 04:36:45.903084 | orchestrator |
2026-04-06 04:36:45.903095 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-06 04:36:45.903106 | orchestrator | Monday 06 April 2026 04:36:38 +0000 (0:00:04.302) 0:00:23.076 **********
2026-04-06 04:36:45.903117 | orchestrator | ok: [testbed-node-1] => (item=ip_vs)
2026-04-06 04:36:45.903128 | orchestrator | ok: [testbed-node-0] => (item=ip_vs)
2026-04-06 04:36:45.903139 | orchestrator | ok: [testbed-node-2] => (item=ip_vs)
2026-04-06 04:36:45.903150 | orchestrator |
2026-04-06 04:36:45.903160 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-06 04:36:45.903225 | orchestrator | Monday 06 April 2026 04:36:40 +0000 (0:00:01.672) 0:00:24.748 **********
2026-04-06 04:36:45.903240 | orchestrator | ok: [testbed-node-1] => (item=ip_vs)
2026-04-06 04:36:45.903251 | orchestrator | ok: [testbed-node-2] => (item=ip_vs)
2026-04-06 04:36:45.903262 | orchestrator | ok: [testbed-node-0] => (item=ip_vs)
2026-04-06 04:36:45.903272 | orchestrator |
2026-04-06 04:36:45.903283 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-06 04:36:45.903294 | orchestrator | Monday 06 April 2026 04:36:42 +0000 (0:00:02.001) 0:00:26.949 **********
2026-04-06 04:36:45.903304 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-04-06 04:36:45.903315 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:36:45.903326 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-04-06 04:36:45.903336 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:36:45.903347 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-04-06 04:36:45.903373 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:36:45.903384 | orchestrator |
2026-04-06 04:36:45.903395 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-04-06 04:36:45.903406 | orchestrator | Monday 06 April 2026 04:36:44 +0000 (0:00:02.001) 0:00:28.950 ********** 2026-04-06 04:36:45.903420 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-06 04:36:45.903443 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-06 04:36:45.903456 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-06 04:36:45.903467 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-06 04:36:45.903479 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-06 04:36:45.903498 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-06 04:36:57.578386 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-06 04:36:57.578498 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-06 04:36:57.578530 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-06 04:36:57.578543 | orchestrator | 2026-04-06 04:36:57.578555 | 
orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-04-06 04:36:57.578567 | orchestrator | Monday 06 April 2026 04:36:47 +0000 (0:00:02.663) 0:00:31.613 ********** 2026-04-06 04:36:57.578577 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:36:57.578588 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:36:57.578598 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:36:57.578607 | orchestrator | 2026-04-06 04:36:57.578617 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-04-06 04:36:57.578627 | orchestrator | Monday 06 April 2026 04:36:49 +0000 (0:00:02.219) 0:00:33.833 ********** 2026-04-06 04:36:57.578637 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-04-06 04:36:57.578648 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-04-06 04:36:57.578658 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-04-06 04:36:57.578668 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-04-06 04:36:57.578678 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-04-06 04:36:57.578688 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-04-06 04:36:57.578697 | orchestrator | 2026-04-06 04:36:57.578707 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-04-06 04:36:57.578717 | orchestrator | Monday 06 April 2026 04:36:52 +0000 (0:00:02.669) 0:00:36.503 ********** 2026-04-06 04:36:57.578726 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:36:57.578736 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:36:57.578746 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:36:57.578756 | orchestrator | 2026-04-06 04:36:57.578765 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-04-06 04:36:57.578775 | orchestrator | Monday 06 April 2026 04:36:54 +0000 (0:00:01.962) 0:00:38.465 ********** 2026-04-06 04:36:57.578785 | orchestrator | ok: 
[testbed-node-0] 2026-04-06 04:36:57.578795 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:36:57.578804 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:36:57.578814 | orchestrator | 2026-04-06 04:36:57.578824 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-04-06 04:36:57.578853 | orchestrator | Monday 06 April 2026 04:36:56 +0000 (0:00:02.443) 0:00:40.909 ********** 2026-04-06 04:36:57.578865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-06 04:36:57.578895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 04:36:57.578907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 04:36:57.578920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e3adc2521f11f9e27a1facb5a855ea9445d16519', '__omit_place_holder__e3adc2521f11f9e27a1facb5a855ea9445d16519'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-06 04:36:57.578932 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:36:57.578949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-06 04:36:57.578961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 04:36:57.578980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 04:36:57.578993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e3adc2521f11f9e27a1facb5a855ea9445d16519', '__omit_place_holder__e3adc2521f11f9e27a1facb5a855ea9445d16519'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-06 04:36:57.579005 | orchestrator | skipping: [testbed-node-1] 2026-04-06 
04:36:57.579024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-06 04:37:01.651800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 04:37:01.651922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 04:37:01.651942 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e3adc2521f11f9e27a1facb5a855ea9445d16519', '__omit_place_holder__e3adc2521f11f9e27a1facb5a855ea9445d16519'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-06 04:37:01.651976 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:37:01.651991 | orchestrator | 2026-04-06 04:37:01.652003 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-04-06 04:37:01.652015 | orchestrator | Monday 06 April 2026 04:36:58 +0000 (0:00:02.103) 0:00:43.012 ********** 2026-04-06 04:37:01.652027 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-06 04:37:01.652039 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-06 04:37:01.652051 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-06 04:37:01.652082 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-06 04:37:01.652100 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-06 04:37:01.652112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 04:37:01.652131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 04:37:01.652143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e3adc2521f11f9e27a1facb5a855ea9445d16519', '__omit_place_holder__e3adc2521f11f9e27a1facb5a855ea9445d16519'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-06 04:37:01.652156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e3adc2521f11f9e27a1facb5a855ea9445d16519', '__omit_place_holder__e3adc2521f11f9e27a1facb5a855ea9445d16519'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-06 04:37:01.652176 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-06 04:37:15.333920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 04:37:15.334114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e3adc2521f11f9e27a1facb5a855ea9445d16519', '__omit_place_holder__e3adc2521f11f9e27a1facb5a855ea9445d16519'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-06 04:37:15.334155 | orchestrator | 2026-04-06 04:37:15.334169 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-06 04:37:15.334182 | orchestrator | Monday 06 April 2026 04:37:02 +0000 (0:00:04.100) 0:00:47.112 ********** 2026-04-06 04:37:15.334270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-06 04:37:15.334286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-06 04:37:15.334297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-06 04:37:15.334309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-06 04:37:15.334349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-06 04:37:15.334362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-06 04:37:15.334382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-06 04:37:15.334394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-06 04:37:15.334406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-06 04:37:15.334417 | orchestrator | 2026-04-06 04:37:15.334429 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-04-06 04:37:15.334440 | orchestrator | Monday 06 April 2026 04:37:07 +0000 (0:00:04.556) 0:00:51.668 ********** 2026-04-06 04:37:15.334451 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-06 04:37:15.334465 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-06 04:37:15.334478 | orchestrator | ok: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-06 04:37:15.334491 | orchestrator | 2026-04-06 04:37:15.334505 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-06 04:37:15.334517 | orchestrator | Monday 06 April 2026 04:37:10 +0000 (0:00:03.033) 0:00:54.701 ********** 2026-04-06 04:37:15.334530 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-06 04:37:15.334543 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-06 04:37:15.334556 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-06 04:37:15.334568 | orchestrator | 2026-04-06 04:37:15.334581 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-04-06 04:37:15.334593 | orchestrator | Monday 06 April 2026 04:37:14 +0000 (0:00:04.340) 0:00:59.042 ********** 2026-04-06 04:37:15.334611 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:37:15.334632 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:37:15.334659 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:37:36.287489 | orchestrator | 2026-04-06 04:37:36.287623 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-04-06 04:37:36.287669 | orchestrator | Monday 06 April 2026 04:37:16 +0000 (0:00:01.660) 0:01:00.702 ********** 2026-04-06 04:37:36.287683 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-06 04:37:36.287695 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-06 04:37:36.287722 | orchestrator | ok: [testbed-node-2] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-06 04:37:36.287734 | orchestrator | 2026-04-06 04:37:36.287745 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-06 04:37:36.287756 | orchestrator | Monday 06 April 2026 04:37:19 +0000 (0:00:03.195) 0:01:03.898 ********** 2026-04-06 04:37:36.287767 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-06 04:37:36.287779 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-06 04:37:36.287790 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-06 04:37:36.287800 | orchestrator | 2026-04-06 04:37:36.287811 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-06 04:37:36.287822 | orchestrator | Monday 06 April 2026 04:37:22 +0000 (0:00:02.991) 0:01:06.890 ********** 2026-04-06 04:37:36.287833 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:37:36.287843 | orchestrator | 2026-04-06 04:37:36.287854 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-04-06 04:37:36.287865 | orchestrator | Monday 06 April 2026 04:37:24 +0000 (0:00:01.775) 0:01:08.665 ********** 2026-04-06 04:37:36.287876 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 2026-04-06 04:37:36.287887 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem) 2026-04-06 04:37:36.287898 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-04-06 04:37:36.287909 | orchestrator | 2026-04-06 04:37:36.287920 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-06 04:37:36.287930 | 
orchestrator | Monday 06 April 2026 04:37:27 +0000 (0:00:02.608) 0:01:11.274 ********** 2026-04-06 04:37:36.287941 | orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-04-06 04:37:36.287952 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-06 04:37:36.287962 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-06 04:37:36.287973 | orchestrator | 2026-04-06 04:37:36.287983 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-04-06 04:37:36.287995 | orchestrator | Monday 06 April 2026 04:37:29 +0000 (0:00:02.897) 0:01:14.171 ********** 2026-04-06 04:37:36.288007 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:37:36.288021 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:37:36.288034 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:37:36.288046 | orchestrator | 2026-04-06 04:37:36.288059 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-04-06 04:37:36.288071 | orchestrator | Monday 06 April 2026 04:37:31 +0000 (0:00:01.400) 0:01:15.571 ********** 2026-04-06 04:37:36.288084 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:37:36.288096 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:37:36.288108 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:37:36.288120 | orchestrator | 2026-04-06 04:37:36.288133 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-06 04:37:36.288146 | orchestrator | Monday 06 April 2026 04:37:33 +0000 (0:00:01.801) 0:01:17.373 ********** 2026-04-06 04:37:36.288162 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-06 04:37:36.288188 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-06 04:37:36.288254 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-06 04:37:36.288271 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-06 04:37:36.288284 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-06 04:37:36.288297 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-06 04:37:36.288311 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-06 04:37:36.288334 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-06 04:37:36.288355 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-06 04:37:39.725808 | orchestrator | 2026-04-06 04:37:39.725911 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-06 04:37:39.725928 | orchestrator | Monday 06 April 2026 04:37:37 +0000 (0:00:04.275) 0:01:21.649 ********** 2026-04-06 04:37:39.725961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-06 04:37:39.725978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 04:37:39.725990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 04:37:39.726003 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:37:39.726082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-06 04:37:39.726118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 04:37:39.726130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 04:37:39.726141 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:37:39.726171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-06 04:37:39.726190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 04:37:39.726237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 04:37:39.726249 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:37:39.726260 | orchestrator | 2026-04-06 04:37:39.726271 | orchestrator | TASK [service-cert-copy : 
mariadb | Copying over backend internal TLS key] ***** 2026-04-06 04:37:39.726282 | orchestrator | Monday 06 April 2026 04:37:39 +0000 (0:00:01.904) 0:01:23.554 ********** 2026-04-06 04:37:39.726294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-06 04:37:39.726313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 04:37:39.726325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 04:37:39.726336 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:37:39.726358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-06 04:37:50.530278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 04:37:50.530426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 04:37:50.530447 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:37:50.530463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-06 04:37:50.530505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 04:37:50.530518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 04:37:50.530529 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:37:50.530541 | orchestrator | 2026-04-06 04:37:50.530553 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-06 04:37:50.530565 | orchestrator | Monday 06 April 2026 04:37:40 +0000 (0:00:01.658) 0:01:25.212 ********** 2026-04-06 04:37:50.530576 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-06 04:37:50.530588 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-06 04:37:50.531085 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-06 04:37:50.531119 | orchestrator | 2026-04-06 04:37:50.531131 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-06 04:37:50.531143 | orchestrator | Monday 06 April 2026 04:37:43 +0000 (0:00:02.744) 0:01:27.956 ********** 2026-04-06 04:37:50.531154 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-06 04:37:50.531170 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-06 04:37:50.531182 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-06 04:37:50.531193 | orchestrator | 2026-04-06 04:37:50.531253 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-06 04:37:50.531265 | orchestrator | Monday 06 April 2026 04:37:46 +0000 (0:00:02.488) 0:01:30.445 ********** 2026-04-06 
04:37:50.531276 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-06 04:37:50.531287 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-06 04:37:50.531298 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-06 04:37:50.531309 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-06 04:37:50.531320 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:37:50.531331 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-06 04:37:50.531342 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:37:50.531353 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-06 04:37:50.531379 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:37:50.531390 | orchestrator | 2026-04-06 04:37:50.531401 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-04-06 04:37:50.531413 | orchestrator | Monday 06 April 2026 04:37:48 +0000 (0:00:02.322) 0:01:32.767 ********** 2026-04-06 04:37:50.531425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 
'timeout': '30'}}}) 2026-04-06 04:37:50.531438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-06 04:37:50.531449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-06 04:37:50.531461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-06 04:37:50.531487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-06 04:37:54.750334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-06 04:37:54.750463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-06 04:37:54.750481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-06 04:37:54.750493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-06 04:37:54.750505 | orchestrator | 2026-04-06 04:37:54.750518 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-04-06 04:37:54.750530 | orchestrator | Monday 06 April 2026 04:37:52 +0000 (0:00:04.039) 0:01:36.806 ********** 2026-04-06 04:37:54.750543 | orchestrator | changed: [testbed-node-0] => { 2026-04-06 04:37:54.750556 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 04:37:54.750567 | orchestrator | } 2026-04-06 04:37:54.750578 | orchestrator | changed: [testbed-node-1] => { 2026-04-06 04:37:54.750589 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 04:37:54.750600 | orchestrator | } 2026-04-06 04:37:54.750611 | orchestrator | changed: [testbed-node-2] => { 2026-04-06 04:37:54.750622 | orchestrator |  
"msg": "Notifying handlers" 2026-04-06 04:37:54.750632 | orchestrator | } 2026-04-06 04:37:54.750643 | orchestrator | 2026-04-06 04:37:54.750655 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-06 04:37:54.750666 | orchestrator | Monday 06 April 2026 04:37:54 +0000 (0:00:01.637) 0:01:38.444 ********** 2026-04-06 04:37:54.750677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-06 04:37:54.750722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 04:37:54.750743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 04:37:54.750755 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:37:54.750767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-06 04:37:54.750779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 04:37:54.750790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 04:37:54.750801 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:37:54.750812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-06 04:37:54.750829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-06 04:37:54.750859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-06 04:38:02.291466 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:38:02.291581 | orchestrator | 2026-04-06 04:38:02.291597 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-04-06 04:38:02.291610 | orchestrator | Monday 06 April 2026 04:37:56 +0000 (0:00:01.927) 0:01:40.371 ********** 2026-04-06 04:38:02.291622 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:38:02.291633 | orchestrator | 2026-04-06 04:38:02.291644 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-06 04:38:02.291655 | orchestrator | Monday 06 April 2026 04:37:58 +0000 (0:00:02.005) 0:01:42.377 ********** 2026-04-06 04:38:02.291671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:38:02.291689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 04:38:02.291703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 04:38:02.291717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 04:38:02.291786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:38:02.291802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 04:38:02.291813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 04:38:02.291825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 04:38:02.291837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:38:02.291866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 04:38:02.291885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 04:38:04.281516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 04:38:04.281645 | orchestrator | 2026-04-06 04:38:04.281673 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-04-06 04:38:04.281694 | orchestrator | Monday 06 April 2026 04:38:03 +0000 (0:00:05.314) 0:01:47.691 ********** 2026-04-06 04:38:04.281717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:38:04.281744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 04:38:04.281795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 04:38:04.281837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 04:38:04.281860 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:38:04.281910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:38:04.281936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 04:38:04.281957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 04:38:04.281976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 04:38:04.282011 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:38:04.282114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:38:04.282138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 04:38:04.282173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 04:38:17.699155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 04:38:17.699305 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:38:17.699325 | orchestrator | 2026-04-06 04:38:17.699339 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-06 04:38:17.699352 | orchestrator | Monday 06 April 2026 04:38:05 +0000 (0:00:01.964) 0:01:49.656 ********** 2026-04-06 04:38:17.699364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:38:17.699379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:38:17.699412 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:38:17.699424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:38:17.699435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:38:17.699446 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:38:17.699458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:38:17.699469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:38:17.699480 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:38:17.699490 | orchestrator | 2026-04-06 04:38:17.699502 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-06 04:38:17.699513 | orchestrator | Monday 06 April 2026 04:38:07 +0000 (0:00:02.023) 0:01:51.679 ********** 2026-04-06 
04:38:17.699524 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:38:17.699535 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:38:17.699546 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:38:17.699557 | orchestrator | 2026-04-06 04:38:17.699574 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-06 04:38:17.699585 | orchestrator | Monday 06 April 2026 04:38:09 +0000 (0:00:02.243) 0:01:53.922 ********** 2026-04-06 04:38:17.699596 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:38:17.699607 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:38:17.699620 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:38:17.699634 | orchestrator | 2026-04-06 04:38:17.699647 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-06 04:38:17.699659 | orchestrator | Monday 06 April 2026 04:38:12 +0000 (0:00:02.899) 0:01:56.822 ********** 2026-04-06 04:38:17.699672 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:38:17.699685 | orchestrator | 2026-04-06 04:38:17.699697 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-06 04:38:17.699709 | orchestrator | Monday 06 April 2026 04:38:14 +0000 (0:00:01.688) 0:01:58.511 ********** 2026-04-06 04:38:17.699750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:38:17.699767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 04:38:17.699791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 04:38:17.699806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:38:17.699826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}}}}) 2026-04-06 04:38:17.699846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 04:38:19.699798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 04:38:19.699931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 04:38:19.699948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 04:38:19.699961 | orchestrator | 2026-04-06 04:38:19.699974 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-06 04:38:19.699987 | orchestrator | Monday 06 April 2026 04:38:18 +0000 (0:00:04.653) 0:02:03.165 ********** 2026-04-06 04:38:19.700016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:38:19.700031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 04:38:19.700062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 04:38:19.700082 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:38:19.700096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:38:19.700109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 04:38:19.700125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 04:38:19.700137 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:38:19.700149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:38:19.700179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 
5672'], 'timeout': '30'}}})  2026-04-06 04:38:36.160370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 04:38:36.160511 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:38:36.160532 | orchestrator | 2026-04-06 04:38:36.160545 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-06 04:38:36.160558 | orchestrator | Monday 06 April 2026 04:38:20 +0000 (0:00:01.884) 0:02:05.049 ********** 2026-04-06 04:38:36.160570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:38:36.160585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:38:36.160598 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:38:36.160609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-06 
04:38:36.160631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:38:36.160643 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:38:36.160654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:38:36.160666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:38:36.160677 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:38:36.160688 | orchestrator | 2026-04-06 04:38:36.160699 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-06 04:38:36.160710 | orchestrator | Monday 06 April 2026 04:38:22 +0000 (0:00:01.684) 0:02:06.733 ********** 2026-04-06 04:38:36.160721 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:38:36.160759 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:38:36.160770 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:38:36.160781 | orchestrator | 2026-04-06 04:38:36.160792 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-06 04:38:36.160804 | orchestrator | Monday 06 April 2026 04:38:24 +0000 (0:00:02.293) 0:02:09.027 ********** 2026-04-06 04:38:36.160815 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:38:36.160826 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:38:36.160836 | orchestrator | ok: 
[testbed-node-2] 2026-04-06 04:38:36.160847 | orchestrator | 2026-04-06 04:38:36.160858 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-06 04:38:36.160869 | orchestrator | Monday 06 April 2026 04:38:27 +0000 (0:00:02.880) 0:02:11.907 ********** 2026-04-06 04:38:36.160880 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:38:36.160891 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:38:36.160902 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:38:36.160912 | orchestrator | 2026-04-06 04:38:36.160923 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-06 04:38:36.160934 | orchestrator | Monday 06 April 2026 04:38:29 +0000 (0:00:01.715) 0:02:13.623 ********** 2026-04-06 04:38:36.160945 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:38:36.160956 | orchestrator | 2026-04-06 04:38:36.160967 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-06 04:38:36.160977 | orchestrator | Monday 06 April 2026 04:38:30 +0000 (0:00:01.462) 0:02:15.085 ********** 2026-04-06 04:38:36.161018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-06 04:38:36.161037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-06 04:38:36.161055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-06 04:38:36.161074 | orchestrator | 2026-04-06 04:38:36.161086 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-06 
04:38:36.161098 | orchestrator | Monday 06 April 2026 04:38:34 +0000 (0:00:03.816) 0:02:18.901 **********
2026-04-06 04:38:36.161109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-04-06 04:38:36.161121 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:38:36.161132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-04-06 04:38:36.161143 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:38:36.161162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})
2026-04-06 04:38:48.696722 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:38:48.696820 | orchestrator |
2026-04-06 04:38:48.696830 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2026-04-06 04:38:48.696839 | orchestrator | Monday 06 April 2026 04:38:37 +0000 (0:00:02.647) 0:02:21.549 **********
2026-04-06 04:38:48.696849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-04-06 04:38:48.696859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-04-06 04:38:48.696890 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:38:48.696911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-04-06 04:38:48.696918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-04-06 04:38:48.696925 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:38:48.696932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-04-06 04:38:48.696939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-04-06 04:38:48.696946 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:38:48.696953 | orchestrator |
2026-04-06 04:38:48.696960 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2026-04-06 04:38:48.696967 | orchestrator | Monday 06 April 2026 04:38:39 +0000 (0:00:02.582) 0:02:24.132 **********
2026-04-06 04:38:48.696974 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:38:48.696981 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:38:48.696988 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:38:48.696994 | orchestrator |
2026-04-06 04:38:48.697001 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2026-04-06 04:38:48.697008 | orchestrator | Monday 06 April 2026 04:38:41 +0000 (0:00:01.699) 0:02:25.832 **********
2026-04-06 04:38:48.697015 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:38:48.697022 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:38:48.697029 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:38:48.697036 | orchestrator |
2026-04-06 04:38:48.697043 | orchestrator | TASK [include_role : cinder] ***************************************************
2026-04-06 04:38:48.697050 | orchestrator | Monday 06 April 2026 04:38:43 +0000 (0:00:02.123) 0:02:27.955 **********
2026-04-06 04:38:48.697057 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 04:38:48.697064 | orchestrator |
2026-04-06 04:38:48.697071 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2026-04-06 04:38:48.697078 | orchestrator | Monday 06 April 2026 04:38:45 +0000 (0:00:01.506) 0:02:29.462 **********
2026-04-06 04:38:48.697102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 04:38:48.697118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 04:38:48.697126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-06 04:38:48.697133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-06 04:38:48.697142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 04:38:48.697153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 04:38:50.546662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-06 04:38:50.546762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-06 04:38:50.546779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 04:38:50.546793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 04:38:50.546804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-06 04:38:50.546852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-06 04:38:50.546864 | orchestrator |
2026-04-06 04:38:50.546876 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-04-06 04:38:50.546893 | orchestrator | Monday 06 April 2026 04:38:50 +0000 (0:00:04.843) 0:02:34.305 **********
2026-04-06 04:38:50.546905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 04:38:50.546917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 04:38:50.546927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-06 04:38:50.546938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-06 04:38:50.546956 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:38:50.546991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 04:39:00.531432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 04:39:00.531545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-06 04:39:00.531562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-06 04:39:00.531576 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:39:00.531591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 04:39:00.531627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 04:39:00.531670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-06 04:39:00.531683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-06 04:39:00.531693 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:39:00.531703 | orchestrator |
2026-04-06 04:39:00.531714 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-04-06 04:39:00.531725 | orchestrator | Monday 06 April 2026 04:38:51 +0000 (0:00:01.769) 0:02:36.074 **********
2026-04-06 04:39:00.531736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-04-06 04:39:00.531748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-04-06 04:39:00.531767 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:39:00.531777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-04-06 04:39:00.531787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-04-06 04:39:00.531797 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:39:00.531807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-04-06 04:39:00.531817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-04-06 04:39:00.531827 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:39:00.531837 | orchestrator |
2026-04-06 04:39:00.531847 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-04-06 04:39:00.531857 | orchestrator | Monday 06 April 2026 04:38:53 +0000 (0:00:01.703) 0:02:37.778 **********
2026-04-06 04:39:00.531867 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:39:00.531878 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:39:00.531887 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:39:00.531897 | orchestrator |
2026-04-06 04:39:00.531907 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-04-06 04:39:00.531917 | orchestrator | Monday 06 April 2026 04:38:55 +0000 (0:00:02.457) 0:02:40.236 **********
2026-04-06 04:39:00.531926 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:39:00.531936 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:39:00.531946 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:39:00.531955 | orchestrator |
2026-04-06 04:39:00.531967 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-04-06 04:39:00.531980 | orchestrator | Monday 06 April 2026 04:38:58 +0000 (0:00:02.881) 0:02:43.117 **********
2026-04-06 04:39:00.531992 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:39:00.532003 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:39:00.532014 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:39:00.532026 | orchestrator |
2026-04-06 04:39:00.532038 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-04-06 04:39:00.532054 | orchestrator | Monday 06 April 2026 04:39:00 +0000 (0:00:01.428) 0:02:44.545 **********
2026-04-06 04:39:00.532066 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:39:00.532077 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:39:00.532095 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:39:07.145446 | orchestrator |
2026-04-06 04:39:07.145576 | orchestrator | TASK [include_role : designate] ************************************************
2026-04-06 04:39:07.145606 | orchestrator | Monday 06 April 2026 04:39:01 +0000 (0:00:01.438) 0:02:45.984 **********
2026-04-06 04:39:07.145628 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 04:39:07.145648 | orchestrator |
2026-04-06 04:39:07.145666 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-04-06 04:39:07.145684 | orchestrator | Monday 06 April 2026 04:39:03 +0000 (0:00:01.835) 0:02:47.820 **********
2026-04-06 04:39:07.145710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 04:39:07.145768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 04:39:07.145795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-06 04:39:07.145818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-06 04:39:07.145886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-06 04:39:07.145914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-06 04:39:07.145948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-06 04:39:07.145962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-06 04:39:07.145977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-06 04:39:07.145991 | orchestrator | skipping: [testbed-node-1]
=> (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 04:39:07.146084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 04:39:08.940718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 04:39:08.940825 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-06 04:39:08.940837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-06 04:39:08.940846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:39:08.940856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 04:39:08.940890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 04:39:08.940899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 04:39:08.940912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 04:39:08.940919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 04:39:08.940926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-06 04:39:08.940932 | orchestrator | 2026-04-06 04:39:08.940940 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-06 04:39:08.940948 | orchestrator | Monday 06 April 2026 04:39:08 +0000 (0:00:04.918) 0:02:52.738 ********** 2026-04-06 04:39:08.940955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:39:08.940973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 04:39:09.451929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 04:39:09.452025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 04:39:09.452036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 04:39:09.452044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 04:39:09.452051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-06 04:39:09.452058 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:39:09.452083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:39:09.452112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 04:39:09.452120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 04:39:09.452160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 04:39:09.452169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 04:39:09.452176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 04:39:09.452192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-06 04:39:09.452199 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:39:09.452213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:39:25.281511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 04:39:25.281650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 04:39:25.281692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 04:39:25.281705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 04:39:25.281753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 04:39:25.281766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-06 04:39:25.281778 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:39:25.281792 | orchestrator | 2026-04-06 04:39:25.281805 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-04-06 04:39:25.281817 | orchestrator | Monday 06 April 2026 04:39:10 +0000 (0:00:02.220) 0:02:54.959 ********** 2026-04-06 04:39:25.281848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:39:25.281863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:39:25.281876 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:39:25.281888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:39:25.281899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:39:25.281910 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:39:25.281921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:39:25.281932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:39:25.281943 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:39:25.281954 | orchestrator | 2026-04-06 04:39:25.281964 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-04-06 04:39:25.281975 | orchestrator | Monday 06 April 2026 04:39:12 +0000 (0:00:01.871) 0:02:56.830 ********** 2026-04-06 04:39:25.281986 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:39:25.282006 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:39:25.282100 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:39:25.282122 | orchestrator | 2026-04-06 04:39:25.282143 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-06 04:39:25.282163 | orchestrator | Monday 06 April 2026 04:39:14 +0000 (0:00:02.291) 0:02:59.122 ********** 2026-04-06 04:39:25.282179 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:39:25.282191 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:39:25.282205 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:39:25.282217 | orchestrator | 2026-04-06 04:39:25.282254 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-06 04:39:25.282272 | orchestrator | Monday 06 April 2026 04:39:17 +0000 (0:00:02.864) 0:03:01.987 ********** 2026-04-06 04:39:25.282285 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:39:25.282298 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:39:25.282311 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:39:25.282323 | orchestrator | 2026-04-06 04:39:25.282336 | orchestrator | TASK 
[include_role : glance] *************************************************** 2026-04-06 04:39:25.282350 | orchestrator | Monday 06 April 2026 04:39:19 +0000 (0:00:01.665) 0:03:03.652 ********** 2026-04-06 04:39:25.282363 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:39:25.282376 | orchestrator | 2026-04-06 04:39:25.282387 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-06 04:39:25.282405 | orchestrator | Monday 06 April 2026 04:39:20 +0000 (0:00:01.564) 0:03:05.216 ********** 2026-04-06 04:39:25.282433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 04:39:25.679698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-06 04:39:25.679859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 04:39:25.679903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-06 04:39:25.679930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 04:39:25.679952 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-06 04:39:30.043059 
| orchestrator | 2026-04-06 04:39:30.043148 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-06 04:39:30.043159 | orchestrator | Monday 06 April 2026 04:39:26 +0000 (0:00:05.838) 0:03:11.055 ********** 2026-04-06 04:39:30.043186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-06 04:39:30.043196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-06 04:39:30.043228 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:39:30.043305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-06 04:39:30.043314 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-06 04:39:30.043326 | orchestrator | skipping: [testbed-node-1] 
2026-04-06 04:39:30.043342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-06 04:39:47.313682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-06 04:39:47.313826 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:39:47.313845 | orchestrator | 2026-04-06 04:39:47.313858 | orchestrator | TASK [haproxy-config : Configuring firewall 
for glance] ************************ 2026-04-06 04:39:47.313870 | orchestrator | Monday 06 April 2026 04:39:31 +0000 (0:00:04.423) 0:03:15.478 ********** 2026-04-06 04:39:47.313954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-06 04:39:47.313972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-06 04:39:47.313984 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:39:47.313996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-06 04:39:47.314107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-06 04:39:47.314123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-06 04:39:47.314135 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:39:47.314146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-06 04:39:47.314157 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:39:47.314168 | orchestrator | 2026-04-06 04:39:47.314191 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-06 04:39:47.314202 | orchestrator 
| Monday 06 April 2026 04:39:35 +0000 (0:00:04.572) 0:03:20.051 ********** 2026-04-06 04:39:47.314215 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:39:47.314229 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:39:47.314293 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:39:47.314306 | orchestrator | 2026-04-06 04:39:47.314318 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-04-06 04:39:47.314331 | orchestrator | Monday 06 April 2026 04:39:38 +0000 (0:00:02.483) 0:03:22.535 ********** 2026-04-06 04:39:47.314344 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:39:47.314357 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:39:47.314369 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:39:47.314382 | orchestrator | 2026-04-06 04:39:47.314394 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-04-06 04:39:47.314407 | orchestrator | Monday 06 April 2026 04:39:41 +0000 (0:00:02.795) 0:03:25.330 ********** 2026-04-06 04:39:47.314420 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:39:47.314433 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:39:47.314445 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:39:47.314457 | orchestrator | 2026-04-06 04:39:47.314470 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-04-06 04:39:47.314482 | orchestrator | Monday 06 April 2026 04:39:42 +0000 (0:00:01.353) 0:03:26.684 ********** 2026-04-06 04:39:47.314495 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:39:47.314507 | orchestrator | 2026-04-06 04:39:47.314519 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-06 04:39:47.314531 | orchestrator | Monday 06 April 2026 04:39:44 +0000 (0:00:01.929) 0:03:28.613 ********** 2026-04-06 04:39:47.314546 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:39:47.314574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:40:04.507320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:40:04.507448 | orchestrator | 2026-04-06 04:40:04.507463 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-06 04:40:04.507474 | orchestrator | Monday 06 April 2026 04:39:48 +0000 (0:00:04.362) 0:03:32.976 ********** 2026-04-06 04:40:04.507485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:40:04.507495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:40:04.507505 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:40:04.507516 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:40:04.507526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:40:04.507537 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:40:04.507546 | orchestrator | 2026-04-06 04:40:04.507556 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-06 04:40:04.507566 | orchestrator | Monday 06 April 2026 04:39:50 +0000 (0:00:01.881) 0:03:34.858 ********** 2026-04-06 04:40:04.507576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:40:04.507589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:40:04.507600 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:40:04.507637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:40:04.507655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:40:04.507665 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:40:04.507675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:40:04.507685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:40:04.507695 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:40:04.507704 | orchestrator | 2026-04-06 04:40:04.507745 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-06 04:40:04.507755 | orchestrator | Monday 06 April 2026 04:39:52 +0000 (0:00:01.744) 0:03:36.602 ********** 2026-04-06 04:40:04.507765 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:40:04.507775 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:40:04.507785 | orchestrator | ok: [testbed-node-2] 2026-04-06 
04:40:04.507794 | orchestrator | 2026-04-06 04:40:04.507804 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-04-06 04:40:04.507814 | orchestrator | Monday 06 April 2026 04:39:54 +0000 (0:00:02.222) 0:03:38.825 ********** 2026-04-06 04:40:04.507825 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:40:04.507837 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:40:04.507848 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:40:04.507859 | orchestrator | 2026-04-06 04:40:04.507870 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-04-06 04:40:04.507881 | orchestrator | Monday 06 April 2026 04:39:57 +0000 (0:00:02.881) 0:03:41.707 ********** 2026-04-06 04:40:04.507893 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:40:04.507905 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:40:04.507917 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:40:04.507929 | orchestrator | 2026-04-06 04:40:04.507940 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-06 04:40:04.507951 | orchestrator | Monday 06 April 2026 04:39:58 +0000 (0:00:01.337) 0:03:43.045 ********** 2026-04-06 04:40:04.507962 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:40:04.507973 | orchestrator | 2026-04-06 04:40:04.507984 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-06 04:40:04.507996 | orchestrator | Monday 06 April 2026 04:40:00 +0000 (0:00:01.966) 0:03:45.011 ********** 2026-04-06 04:40:04.508026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-06 04:40:06.475802 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-06 04:40:06.475907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-06 04:40:06.475937 | orchestrator | 2026-04-06 04:40:06.475947 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-04-06 04:40:06.475956 | orchestrator | Monday 06 April 2026 04:40:05 +0000 (0:00:05.111) 0:03:50.122 ********** 2026-04-06 04:40:06.475965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-06 04:40:06.475983 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:40:06.476005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-06 04:40:16.363822 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:40:16.363968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-06 04:40:16.364022 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:40:16.364044 | orchestrator | 2026-04-06 04:40:16.364073 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] 
*********************** 2026-04-06 04:40:16.364116 | orchestrator | Monday 06 April 2026 04:40:07 +0000 (0:00:01.825) 0:03:51.948 ********** 2026-04-06 04:40:16.364149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-06 04:40:16.364212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-06 04:40:16.364233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-06 04:40:16.364287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-06 04:40:16.364307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-06 04:40:16.364329 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:40:16.364375 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-06 04:40:16.364396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-06 04:40:16.364415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-06 04:40:16.364432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-06 04:40:16.364466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-06 04:40:16.364478 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:40:16.364490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-06 04:40:16.364502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-06 04:40:16.364522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-06 04:40:16.364535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-06 04:40:16.364545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-06 04:40:16.364554 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:40:16.364564 | orchestrator | 2026-04-06 04:40:16.364575 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-06 04:40:16.364584 | orchestrator | Monday 06 April 2026 04:40:09 +0000 (0:00:01.944) 0:03:53.893 ********** 2026-04-06 04:40:16.364594 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:40:16.364605 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:40:16.364617 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:40:16.364634 | 
orchestrator | 2026-04-06 04:40:16.364650 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-04-06 04:40:16.364666 | orchestrator | Monday 06 April 2026 04:40:11 +0000 (0:00:02.126) 0:03:56.020 ********** 2026-04-06 04:40:16.364682 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:40:16.364698 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:40:16.364714 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:40:16.364729 | orchestrator | 2026-04-06 04:40:16.364745 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-04-06 04:40:16.364761 | orchestrator | Monday 06 April 2026 04:40:14 +0000 (0:00:02.841) 0:03:58.861 ********** 2026-04-06 04:40:16.364777 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:40:16.364793 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:40:16.364810 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:40:16.364826 | orchestrator | 2026-04-06 04:40:16.364841 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-04-06 04:40:16.364858 | orchestrator | Monday 06 April 2026 04:40:16 +0000 (0:00:01.635) 0:04:00.496 ********** 2026-04-06 04:40:16.364887 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:40:24.631079 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:40:24.631172 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:40:24.631181 | orchestrator | 2026-04-06 04:40:24.631190 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-06 04:40:24.631198 | orchestrator | Monday 06 April 2026 04:40:17 +0000 (0:00:01.417) 0:04:01.914 ********** 2026-04-06 04:40:24.631228 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:40:24.631236 | orchestrator | 2026-04-06 04:40:24.631243 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] 
******************* 2026-04-06 04:40:24.631291 | orchestrator | Monday 06 April 2026 04:40:19 +0000 (0:00:01.973) 0:04:03.888 ********** 2026-04-06 04:40:24.631302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-06 04:40:24.631313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 04:40:24.631336 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 04:40:24.631345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-06 04:40:24.631368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-06 04:40:24.631382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 04:40:24.631390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 04:40:24.631400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 04:40:24.631407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 04:40:24.631413 | orchestrator | 2026-04-06 04:40:24.631419 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-06 04:40:24.631427 | orchestrator | Monday 06 April 2026 04:40:24 +0000 (0:00:04.698) 0:04:08.587 ********** 2026-04-06 04:40:24.631440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-06 04:40:27.841477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 04:40:27.841609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 04:40:27.841628 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:40:27.841664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-06 04:40:27.841689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 04:40:27.841740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 04:40:27.841763 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:40:27.841817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-06 04:40:27.841848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 04:40:27.841876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 04:40:27.841896 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:40:27.841914 | orchestrator | 2026-04-06 04:40:27.841932 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-04-06 04:40:27.841954 | orchestrator | Monday 06 April 2026 04:40:25 +0000 (0:00:01.640) 0:04:10.227 ********** 2026-04-06 04:40:27.841974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-06 04:40:27.841996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-06 04:40:27.842162 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:40:27.842191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-06 04:40:27.842211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-06 04:40:27.842232 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:40:27.842284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-06 04:40:27.842305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-06 04:40:27.842327 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:40:27.842346 | orchestrator | 
2026-04-06 04:40:27.842366 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-06 04:40:27.842392 | orchestrator | Monday 06 April 2026 04:40:27 +0000 (0:00:01.876) 0:04:12.103 ********** 2026-04-06 04:40:41.573586 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:40:41.573725 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:40:41.573750 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:40:41.573769 | orchestrator | 2026-04-06 04:40:41.573789 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-06 04:40:41.573808 | orchestrator | Monday 06 April 2026 04:40:30 +0000 (0:00:02.240) 0:04:14.344 ********** 2026-04-06 04:40:41.573827 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:40:41.573846 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:40:41.573863 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:40:41.573881 | orchestrator | 2026-04-06 04:40:41.573900 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-06 04:40:41.573918 | orchestrator | Monday 06 April 2026 04:40:33 +0000 (0:00:02.947) 0:04:17.291 ********** 2026-04-06 04:40:41.573936 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:40:41.573956 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:40:41.573974 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:40:41.573993 | orchestrator | 2026-04-06 04:40:41.574012 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-06 04:40:41.574098 | orchestrator | Monday 06 April 2026 04:40:34 +0000 (0:00:01.368) 0:04:18.659 ********** 2026-04-06 04:40:41.574121 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:40:41.574141 | orchestrator | 2026-04-06 04:40:41.574163 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-06 
04:40:41.574185 | orchestrator | Monday 06 April 2026 04:40:36 +0000 (0:00:02.058) 0:04:20.718 ********** 2026-04-06 04:40:41.574237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:40:41.574341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 04:40:41.574370 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:40:41.574421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 04:40:41.574445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:40:41.574489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 04:40:41.574510 | orchestrator | 2026-04-06 04:40:41.574531 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-06 04:40:41.574552 | orchestrator | Monday 06 April 2026 04:40:41 +0000 (0:00:04.726) 
0:04:25.444 ********** 2026-04-06 04:40:41.574573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:40:41.574628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 04:40:56.146373 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:40:56.146496 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:40:56.146519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 04:40:56.146560 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:40:56.146574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:40:56.146627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 04:40:56.146641 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:40:56.146652 | orchestrator | 2026-04-06 04:40:56.146664 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] 
************************ 2026-04-06 04:40:56.146676 | orchestrator | Monday 06 April 2026 04:40:43 +0000 (0:00:01.979) 0:04:27.424 ********** 2026-04-06 04:40:56.146705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:40:56.146720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:40:56.146733 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:40:56.146744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:40:56.146756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:40:56.146777 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:40:56.146788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:40:56.146799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:40:56.146810 | orchestrator | skipping: [testbed-node-2] 
2026-04-06 04:40:56.146821 | orchestrator | 2026-04-06 04:40:56.146832 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-06 04:40:56.146848 | orchestrator | Monday 06 April 2026 04:40:45 +0000 (0:00:02.173) 0:04:29.597 ********** 2026-04-06 04:40:56.146860 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:40:56.146871 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:40:56.146882 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:40:56.146893 | orchestrator | 2026-04-06 04:40:56.146904 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-06 04:40:56.146914 | orchestrator | Monday 06 April 2026 04:40:47 +0000 (0:00:02.276) 0:04:31.873 ********** 2026-04-06 04:40:56.146925 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:40:56.146936 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:40:56.146946 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:40:56.146957 | orchestrator | 2026-04-06 04:40:56.146968 | orchestrator | TASK [include_role : manila] *************************************************** 2026-04-06 04:40:56.146979 | orchestrator | Monday 06 April 2026 04:40:50 +0000 (0:00:02.935) 0:04:34.809 ********** 2026-04-06 04:40:56.146989 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:40:56.147000 | orchestrator | 2026-04-06 04:40:56.147011 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-06 04:40:56.147022 | orchestrator | Monday 06 April 2026 04:40:52 +0000 (0:00:02.077) 0:04:36.887 ********** 2026-04-06 04:40:56.147034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:40:56.147048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 04:40:56.147082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 04:40:57.860930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 04:40:57.861056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:40:57.861075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 
'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 04:40:57.861090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:40:57.861103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 04:40:57.861158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 04:40:57.861177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 04:40:57.861189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 04:40:57.861201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 04:40:57.861213 | orchestrator | 2026-04-06 04:40:57.861227 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-06 04:40:57.861240 | orchestrator | Monday 06 April 2026 04:40:57 +0000 (0:00:04.884) 0:04:41.771 ********** 2026-04-06 04:40:57.861253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:40:57.861335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 04:41:00.073976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 04:41:00.074162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 04:41:00.074181 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:41:00.074197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:41:00.074212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 04:41:00.074226 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 04:41:00.074332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 04:41:00.074349 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:41:00.074367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 
'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:41:00.074379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 04:41:00.074391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 04:41:00.074402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 04:41:00.074422 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:41:00.074434 | orchestrator | 2026-04-06 04:41:00.074446 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-06 04:41:00.074459 | orchestrator | Monday 06 April 2026 04:40:59 +0000 (0:00:02.059) 0:04:43.831 ********** 2026-04-06 04:41:00.074471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:41:00.074486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:41:00.074500 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:41:00.074514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:41:00.074535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option 
httpchk']}})  2026-04-06 04:41:16.190830 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:41:16.190950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:41:16.190969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:41:16.190985 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:41:16.190996 | orchestrator | 2026-04-06 04:41:16.191009 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-06 04:41:16.191021 | orchestrator | Monday 06 April 2026 04:41:01 +0000 (0:00:01.771) 0:04:45.602 ********** 2026-04-06 04:41:16.191032 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:41:16.191061 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:41:16.191073 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:41:16.191084 | orchestrator | 2026-04-06 04:41:16.191095 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-06 04:41:16.191106 | orchestrator | Monday 06 April 2026 04:41:03 +0000 (0:00:02.225) 0:04:47.828 ********** 2026-04-06 04:41:16.191117 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:41:16.191128 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:41:16.191139 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:41:16.191150 | orchestrator | 2026-04-06 04:41:16.191161 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-06 04:41:16.191172 | orchestrator | Monday 06 April 2026 04:41:06 +0000 (0:00:02.961) 0:04:50.789 ********** 2026-04-06 04:41:16.191183 | orchestrator | 
included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:41:16.191194 | orchestrator | 2026-04-06 04:41:16.191205 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-06 04:41:16.191216 | orchestrator | Monday 06 April 2026 04:41:09 +0000 (0:00:02.610) 0:04:53.400 ********** 2026-04-06 04:41:16.191227 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-04-06 04:41:16.191238 | orchestrator | 2026-04-06 04:41:16.191249 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-06 04:41:16.191326 | orchestrator | Monday 06 April 2026 04:41:13 +0000 (0:00:04.510) 0:04:57.910 ********** 2026-04-06 04:41:16.191344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 04:41:16.191381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-06 04:41:16.191397 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:41:16.191418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 04:41:16.191442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-06 04:41:16.191456 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:41:16.191479 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 04:41:19.851430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-06 04:41:19.851556 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:41:19.851586 | orchestrator | 2026-04-06 04:41:19.851608 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-04-06 04:41:19.851629 | orchestrator | Monday 06 April 2026 04:41:17 +0000 (0:00:03.627) 0:05:01.538 ********** 2026-04-06 04:41:19.851680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 04:41:19.851707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-06 04:41:19.851729 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:41:19.851787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 04:41:19.851823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-06 04:41:19.851843 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:41:19.851865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  
2026-04-06 04:41:19.851900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-04-06 04:41:37.257614 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:41:37.257731 | orchestrator |
2026-04-06 04:41:37.257748 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-04-06 04:41:37.257761 | orchestrator | Monday 06 April 2026 04:41:20 +0000 (0:00:03.711) 0:05:05.250 **********
2026-04-06 04:41:37.257791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-06 04:41:37.257834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-06 04:41:37.257848 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:41:37.257861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-06 04:41:37.257873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-06 04:41:37.257884 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:41:37.257896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-06 04:41:37.257907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-04-06 04:41:37.257919 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:41:37.257930 | orchestrator |
2026-04-06 04:41:37.257941 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-04-06 04:41:37.257953 | orchestrator | Monday 06 April 2026 04:41:24 +0000 (0:00:04.011) 0:05:09.261 **********
2026-04-06 04:41:37.257972 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:41:37.258000 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:41:37.258012 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:41:37.258089 | orchestrator |
2026-04-06 04:41:37.258101 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-04-06 04:41:37.258112 | orchestrator | Monday 06 April 2026 04:41:28 +0000 (0:00:03.073) 0:05:12.334 **********
2026-04-06 04:41:37.258129 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:41:37.258140 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:41:37.258152 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:41:37.258165 | orchestrator |
2026-04-06 04:41:37.258178 | orchestrator | TASK [include_role : masakari] *************************************************
2026-04-06 04:41:37.258190 | orchestrator | Monday 06 April 2026 04:41:30 +0000 (0:00:02.814) 0:05:15.148 **********
2026-04-06 04:41:37.258203 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:41:37.258217 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:41:37.258229 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:41:37.258242 | orchestrator |
2026-04-06 04:41:37.258255 | orchestrator | TASK [include_role : memcached] ************************************************
2026-04-06 04:41:37.258290 | orchestrator | Monday 06 April 2026 04:41:32 +0000 (0:00:01.441) 0:05:16.590 **********
2026-04-06 04:41:37.258303 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 04:41:37.258316 | orchestrator |
2026-04-06 04:41:37.258328 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-04-06 04:41:37.258341 | orchestrator | Monday 06 April 2026 04:41:34 +0000 (0:00:02.007) 0:05:18.597 **********
2026-04-06 04:41:37.258355 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-06 04:41:37.258371 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-06 04:41:37.258385 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-06 04:41:37.258399 | orchestrator |
2026-04-06 04:41:37.258420 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2026-04-06 04:41:37.258433 | orchestrator | Monday 06 April 2026 04:41:37 +0000 (0:00:02.792) 0:05:21.389 **********
2026-04-06 04:41:37.258455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-06 04:41:52.439127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-06 04:41:52.439267 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:41:52.439365 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:41:52.439378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-04-06 04:41:52.439392 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:41:52.439405 | orchestrator |
2026-04-06 04:41:52.439418 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2026-04-06 04:41:52.439432 | orchestrator | Monday 06 April 2026 04:41:38 +0000 (0:00:01.538) 0:05:22.928 **********
2026-04-06 04:41:52.439448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-04-06 04:41:52.439462 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:41:52.439475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-04-06 04:41:52.439484 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:41:52.439496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-04-06 04:41:52.439535 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:41:52.439547 | orchestrator |
2026-04-06 04:41:52.439560 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-04-06 04:41:52.439572 | orchestrator | Monday 06 April 2026 04:41:40 +0000 (0:00:01.698) 0:05:24.626 **********
2026-04-06 04:41:52.439585 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:41:52.439596 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:41:52.439609 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:41:52.439622 | orchestrator |
2026-04-06 04:41:52.439633 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-04-06 04:41:52.439642 | orchestrator | Monday 06 April 2026 04:41:41 +0000 (0:00:01.451) 0:05:26.078 **********
2026-04-06 04:41:52.439650 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:41:52.439658 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:41:52.439667 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:41:52.439678 | orchestrator |
2026-04-06 04:41:52.439690 | orchestrator | TASK [include_role : mistral] **************************************************
2026-04-06 04:41:52.439702 | orchestrator | Monday 06 April 2026 04:41:44 +0000 (0:00:02.337) 0:05:28.416 **********
2026-04-06 04:41:52.439714 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:41:52.439727 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:41:52.439740 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:41:52.439753 | orchestrator |
2026-04-06 04:41:52.439765 | orchestrator | TASK [include_role : neutron] **************************************************
2026-04-06 04:41:52.439778 | orchestrator | Monday 06 April 2026 04:41:45 +0000 (0:00:01.483) 0:05:29.900 **********
2026-04-06 04:41:52.439791 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 04:41:52.439805 | orchestrator |
2026-04-06 04:41:52.439814 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-04-06 04:41:52.439823 | orchestrator | Monday 06 April 2026 04:41:47 +0000 (0:00:02.246) 0:05:32.147 **********
2026-04-06 04:41:52.439861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 04:41:52.439877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-06 04:41:52.439892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-04-06 04:41:52.439920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 04:41:52.439987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-04-06 04:41:52.510440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-06 04:41:52.510557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-06 04:41:52.510603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 04:41:52.510635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-04-06 04:41:52.510649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-06 04:41:52.510684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-04-06 04:41:52.510705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-06 04:41:52.510718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-06 04:41:52.510730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-04-06 04:41:52.510748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 04:41:52.510768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-06 04:41:52.608124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-04-06 04:41:52.608254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-06 04:41:52.608322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-06 04:41:52.608338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-04-06 04:41:52.608367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-04-06 04:41:52.608399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 04:41:52.608422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-06 04:41:52.608464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-06 04:41:52.608477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-04-06 04:41:52.608490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-06 04:41:52.608508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-04-06 04:41:52.608530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-04-06 04:41:52.684450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-06 04:41:52.684608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-06 04:41:52.684625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-06 04:41:52.684639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 04:41:52.684652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-06 04:41:52.684692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-06 04:41:52.684733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 04:41:52.684768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-06 04:41:52.684793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-06 04:41:52.684814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-06 04:41:52.684844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-06 04:41:52.684881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-06 04:41:55.416121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-06 04:41:55.416264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-06 04:41:55.416380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-06 04:41:55.416446 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-06 04:41:55.416463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-06 04:41:55.416503 | orchestrator | 2026-04-06 04:41:55.416517 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-06 04:41:55.416530 | orchestrator | Monday 06 April 2026 04:41:53 +0000 (0:00:06.096) 0:05:38.243 ********** 
2026-04-06 04:41:55.416567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:41:55.416582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-06 04:41:55.416596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-06 04:41:55.416615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': 
'30'}, 'pid_mode': ''}})  2026-04-06 04:41:55.416648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-06 04:41:55.522644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-06 04:41:55.522787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:41:55.522807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-06 04:41:55.522843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-06 04:41:55.522857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': 
True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-06 04:41:55.522920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 04:41:55.522935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-06 04:41:55.522948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 04:41:55.522965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-06 04:41:55.522985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-06 04:41:55.523006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-06 04:41:55.618519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-06 04:41:55.618645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-06 04:41:55.618663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-06 04:41:55.618676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 04:41:55.618708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-06 04:41:55.618746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 04:41:55.618783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-06 04:41:55.618799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-06 04:41:55.618811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-06 04:41:55.618823 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:41:55.618837 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-06 04:41:55.618871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-06 04:41:55.618884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-06 04:41:55.618904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': 
{'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-06 04:41:55.782758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-06 04:41:55.782911 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:41:55.782941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:41:55.783028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-06 04:41:55.783054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-06 04:41:55.783105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-06 04:41:55.783128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': 
True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-06 04:41:55.783149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-06 04:41:55.783190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-06 04:41:55.783212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 04:41:55.783233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 04:41:55.783266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-06 04:42:11.055519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-06 04:42:11.055671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-06 04:42:11.055733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-06 04:42:11.055760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-06 04:42:11.055911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-06 04:42:11.055950 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:42:11.055974 | orchestrator | 2026-04-06 04:42:11.055998 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-06 04:42:11.056021 | orchestrator | Monday 06 April 2026 04:41:56 +0000 (0:00:02.983) 0:05:41.226 ********** 2026-04-06 04:42:11.056045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:42:11.056234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:42:11.056269 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:42:11.056325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:42:11.056351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:42:11.056392 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:42:11.056413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:42:11.056434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:42:11.056455 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:42:11.056475 | orchestrator | 2026-04-06 04:42:11.056496 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-06 04:42:11.056515 | orchestrator | Monday 06 April 2026 04:41:59 +0000 
(0:00:02.855) 0:05:44.082 ********** 2026-04-06 04:42:11.056535 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:42:11.056557 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:42:11.056577 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:42:11.056598 | orchestrator | 2026-04-06 04:42:11.056619 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-06 04:42:11.056640 | orchestrator | Monday 06 April 2026 04:42:02 +0000 (0:00:02.260) 0:05:46.342 ********** 2026-04-06 04:42:11.056671 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:42:11.056693 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:42:11.056714 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:42:11.056735 | orchestrator | 2026-04-06 04:42:11.056755 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-06 04:42:11.056776 | orchestrator | Monday 06 April 2026 04:42:05 +0000 (0:00:02.960) 0:05:49.303 ********** 2026-04-06 04:42:11.056798 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:42:11.056819 | orchestrator | 2026-04-06 04:42:11.056840 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-06 04:42:11.056861 | orchestrator | Monday 06 April 2026 04:42:07 +0000 (0:00:02.318) 0:05:51.622 ********** 2026-04-06 04:42:11.056881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-06 04:42:11.056919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-06 04:42:27.491279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-06 04:42:27.491456 | orchestrator | 2026-04-06 04:42:27.491476 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-06 04:42:27.491489 | orchestrator | Monday 06 April 2026 04:42:12 +0000 (0:00:04.971) 0:05:56.594 ********** 2026-04-06 04:42:27.491519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-06 04:42:27.491534 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:42:27.491548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-06 04:42:27.491560 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:42:27.491616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 
'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-06 04:42:27.491630 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:42:27.491641 | orchestrator | 2026-04-06 04:42:27.491652 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-06 04:42:27.491664 | orchestrator | Monday 06 April 2026 04:42:14 +0000 (0:00:01.995) 0:05:58.590 ********** 2026-04-06 04:42:27.491676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-06 04:42:27.491690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-06 04:42:27.491710 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:42:27.491729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-06 04:42:27.491755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-06 04:42:27.491774 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:42:27.491790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-06 04:42:27.491810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-06 04:42:27.491828 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:42:27.491847 | orchestrator | 2026-04-06 04:42:27.491865 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-06 04:42:27.491883 | orchestrator | Monday 06 April 2026 04:42:15 +0000 (0:00:01.665) 0:06:00.256 ********** 2026-04-06 04:42:27.491903 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:42:27.491922 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:42:27.491941 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:42:27.491960 | orchestrator | 2026-04-06 04:42:27.491978 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-06 04:42:27.491998 | orchestrator | Monday 06 April 2026 04:42:18 +0000 (0:00:02.275) 0:06:02.531 ********** 2026-04-06 04:42:27.492014 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:42:27.492039 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:42:27.492051 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:42:27.492065 | orchestrator | 2026-04-06 04:42:27.492078 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-04-06 
04:42:27.492091 | orchestrator | Monday 06 April 2026 04:42:21 +0000 (0:00:02.977) 0:06:05.508 ********** 2026-04-06 04:42:27.492104 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:42:27.492117 | orchestrator | 2026-04-06 04:42:27.492130 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-06 04:42:27.492142 | orchestrator | Monday 06 April 2026 04:42:23 +0000 (0:00:02.480) 0:06:07.989 ********** 2026-04-06 04:42:27.492171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:42:29.516091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:42:29.516209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:42:29.516248 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:42:29.516264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 04:42:29.516352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 04:42:29.516374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:42:29.516387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 04:42:29.516407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 04:42:29.516420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:42:29.516441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 04:42:31.766710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 04:42:31.766816 | orchestrator | 2026-04-06 04:42:31.766832 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-06 04:42:31.766845 | orchestrator | Monday 06 April 2026 04:42:30 +0000 (0:00:06.957) 0:06:14.946 ********** 2026-04-06 04:42:31.766879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:42:31.766916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:42:31.766930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 04:42:31.766962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 04:42:31.766975 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:42:31.766994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:42:31.767007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:42:31.767027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2026-04-06 04:42:31.767039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 04:42:31.767051 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:42:31.767070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:42:49.544582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': 
{'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:42:49.544728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 04:42:49.544748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 04:42:49.544762 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:42:49.544776 | orchestrator | 2026-04-06 04:42:49.544788 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-06 04:42:49.544800 | orchestrator | Monday 06 April 2026 04:42:32 +0000 (0:00:02.210) 0:06:17.156 ********** 2026-04-06 04:42:49.544812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:42:49.544826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:42:49.544839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:42:49.544851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:42:49.544863 | orchestrator | skipping: [testbed-node-0] 2026-04-06 
04:42:49.544874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:42:49.544905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:42:49.544917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:42:49.544943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:42:49.544955 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:42:49.544966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:42:49.544977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:42:49.544988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:42:49.544999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:42:49.545010 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:42:49.545021 | orchestrator | 2026-04-06 04:42:49.545032 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-06 04:42:49.545043 | orchestrator | Monday 06 April 2026 04:42:34 +0000 (0:00:02.025) 0:06:19.182 ********** 2026-04-06 04:42:49.545054 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:42:49.545066 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:42:49.545076 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:42:49.545088 | orchestrator | 2026-04-06 04:42:49.545100 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-06 04:42:49.545113 | orchestrator | Monday 06 April 2026 04:42:37 +0000 (0:00:02.175) 0:06:21.357 ********** 2026-04-06 04:42:49.545125 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:42:49.545137 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:42:49.545156 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:42:49.545176 | orchestrator | 2026-04-06 04:42:49.545196 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-06 04:42:49.545214 | orchestrator | Monday 06 April 2026 04:42:40 +0000 (0:00:03.308) 0:06:24.665 ********** 2026-04-06 04:42:49.545236 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:42:49.545257 | orchestrator | 2026-04-06 04:42:49.545280 | orchestrator | TASK [nova-cell : Configure loadbalancer for 
nova-novncproxy] ****************** 2026-04-06 04:42:49.545323 | orchestrator | Monday 06 April 2026 04:42:42 +0000 (0:00:02.446) 0:06:27.112 ********** 2026-04-06 04:42:49.545336 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-06 04:42:49.545351 | orchestrator | 2026-04-06 04:42:49.545363 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-06 04:42:49.545377 | orchestrator | Monday 06 April 2026 04:42:45 +0000 (0:00:02.437) 0:06:29.550 ********** 2026-04-06 04:42:49.545391 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-06 04:42:49.545417 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-06 04:42:49.545449 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-06 04:43:09.254015 | orchestrator | 2026-04-06 04:43:09.254228 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-06 04:43:09.254248 | orchestrator | Monday 06 April 2026 04:42:50 +0000 (0:00:05.359) 0:06:34.909 ********** 2026-04-06 04:43:09.254265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-06 04:43:09.254282 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:43:09.254321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-06 04:43:09.254333 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:43:09.254345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-06 04:43:09.254357 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:43:09.254368 | orchestrator | 2026-04-06 04:43:09.254380 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-06 04:43:09.254391 | orchestrator | Monday 06 April 2026 04:42:53 +0000 (0:00:02.686) 0:06:37.595 ********** 2026-04-06 04:43:09.254404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-06 04:43:09.254418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-06 04:43:09.254431 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:43:09.254465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-06 04:43:09.254477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-06 04:43:09.254488 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:43:09.254499 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-06 04:43:09.254511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-06 04:43:09.254521 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:43:09.254532 | orchestrator | 2026-04-06 04:43:09.254543 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-06 04:43:09.254554 | orchestrator | Monday 06 April 2026 04:42:55 +0000 (0:00:02.644) 0:06:40.240 ********** 2026-04-06 04:43:09.254569 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:43:09.254583 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:43:09.254595 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:43:09.254624 | orchestrator | 2026-04-06 04:43:09.254648 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-06 04:43:09.254661 | orchestrator | Monday 06 April 2026 04:42:59 +0000 (0:00:03.506) 0:06:43.746 ********** 2026-04-06 04:43:09.254674 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:43:09.254695 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:43:09.254726 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:43:09.254739 | orchestrator | 2026-04-06 04:43:09.254753 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-06 04:43:09.254764 | orchestrator | Monday 06 April 2026 04:43:03 +0000 (0:00:04.165) 0:06:47.912 ********** 2026-04-06 04:43:09.254776 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-06 04:43:09.254788 | orchestrator | 2026-04-06 04:43:09.254799 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-06 04:43:09.254810 | orchestrator | Monday 06 April 2026 04:43:05 +0000 (0:00:01.957) 0:06:49.870 ********** 2026-04-06 04:43:09.254822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-06 04:43:09.254834 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:43:09.254846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-06 04:43:09.254857 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:43:09.254876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-06 04:43:09.254887 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:43:09.254898 | orchestrator | 2026-04-06 04:43:09.254910 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-06 04:43:09.254921 | orchestrator | Monday 06 April 2026 04:43:07 +0000 (0:00:02.220) 0:06:52.090 ********** 2026-04-06 04:43:09.254932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-06 04:43:09.254943 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:43:09.254955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-06 04:43:09.254966 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:43:09.254988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 
'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-06 04:43:43.948788 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:43:43.948903 | orchestrator | 2026-04-06 04:43:43.948947 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-06 04:43:43.948968 | orchestrator | Monday 06 April 2026 04:43:10 +0000 (0:00:02.574) 0:06:54.664 ********** 2026-04-06 04:43:43.948988 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:43:43.949008 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:43:43.949021 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:43:43.949032 | orchestrator | 2026-04-06 04:43:43.949043 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-06 04:43:43.949054 | orchestrator | Monday 06 April 2026 04:43:13 +0000 (0:00:02.889) 0:06:57.554 ********** 2026-04-06 04:43:43.949065 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:43:43.949077 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:43:43.949088 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:43:43.949098 | orchestrator | 2026-04-06 04:43:43.949110 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-06 04:43:43.949121 | orchestrator | Monday 06 April 2026 04:43:16 +0000 (0:00:03.467) 0:07:01.021 ********** 2026-04-06 04:43:43.949131 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:43:43.949143 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:43:43.949153 | orchestrator | ok: [testbed-node-2] 2026-04-06 
04:43:43.949190 | orchestrator | 2026-04-06 04:43:43.949201 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-06 04:43:43.949212 | orchestrator | Monday 06 April 2026 04:43:20 +0000 (0:00:03.985) 0:07:05.007 ********** 2026-04-06 04:43:43.949224 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-06 04:43:43.949236 | orchestrator | 2026-04-06 04:43:43.949247 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-06 04:43:43.949258 | orchestrator | Monday 06 April 2026 04:43:22 +0000 (0:00:01.677) 0:07:06.684 ********** 2026-04-06 04:43:43.949271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-06 04:43:43.949286 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:43:43.949298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-06 04:43:43.949342 | 
orchestrator | skipping: [testbed-node-0] 2026-04-06 04:43:43.949362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-06 04:43:43.949381 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:43:43.949399 | orchestrator | 2026-04-06 04:43:43.949420 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-06 04:43:43.949441 | orchestrator | Monday 06 April 2026 04:43:25 +0000 (0:00:02.827) 0:07:09.512 ********** 2026-04-06 04:43:43.949461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-06 04:43:43.949480 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:43:43.949535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-06 04:43:43.949549 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:43:43.949572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-06 04:43:43.949583 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:43:43.949594 | orchestrator | 2026-04-06 04:43:43.949605 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-06 04:43:43.949616 | orchestrator | Monday 06 April 2026 04:43:27 +0000 (0:00:02.604) 0:07:12.116 ********** 2026-04-06 04:43:43.949627 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:43:43.949638 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:43:43.949649 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:43:43.949660 | orchestrator | 2026-04-06 04:43:43.949671 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-06 04:43:43.949682 | orchestrator | Monday 06 April 2026 04:43:30 +0000 (0:00:02.518) 0:07:14.635 ********** 2026-04-06 04:43:43.949692 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:43:43.949703 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:43:43.949714 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:43:43.949725 | orchestrator | 2026-04-06 04:43:43.949736 | orchestrator | 
TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-06 04:43:43.949747 | orchestrator | Monday 06 April 2026 04:43:34 +0000 (0:00:03.679) 0:07:18.314 ********** 2026-04-06 04:43:43.949757 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:43:43.949768 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:43:43.949779 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:43:43.949789 | orchestrator | 2026-04-06 04:43:43.949800 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-06 04:43:43.949811 | orchestrator | Monday 06 April 2026 04:43:38 +0000 (0:00:04.227) 0:07:22.541 ********** 2026-04-06 04:43:43.949822 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:43:43.949833 | orchestrator | 2026-04-06 04:43:43.949844 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-06 04:43:43.949855 | orchestrator | Monday 06 April 2026 04:43:40 +0000 (0:00:02.106) 0:07:24.648 ********** 2026-04-06 04:43:43.949867 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 04:43:43.949880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 04:43:43.949926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 04:43:44.454139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 04:43:44.454244 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 04:43:44.454261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 04:43:44.454274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 04:43:44.454287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 04:43:44.454413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 04:43:44.454429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 04:43:44.454441 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 04:43:44.454453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 04:43:44.454465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 04:43:44.454476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 04:43:44.454500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 04:43:44.454513 | orchestrator | 2026-04-06 04:43:44.454533 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-06 04:43:46.168191 | orchestrator | Monday 06 April 2026 04:43:45 +0000 (0:00:05.132) 0:07:29.780 ********** 2026-04-06 04:43:46.168349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-06 04:43:46.168387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  
2026-04-06 04:43:46.168402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 04:43:46.168414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 04:43:46.168452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 04:43:46.168464 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:43:46.168497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-06 04:43:46.168511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 04:43:46.168523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 04:43:46.168535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 04:43:46.168591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 04:43:46.168613 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:43:46.168630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-06 04:43:46.168652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 04:44:02.702105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 04:44:02.702231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 04:44:02.702247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 04:44:02.702281 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:44:02.702295 | orchestrator | 2026-04-06 04:44:02.702306 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-06 04:44:02.702403 | orchestrator | Monday 06 April 2026 04:43:47 +0000 (0:00:01.804) 0:07:31.585 ********** 2026-04-06 04:44:02.702414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-06 04:44:02.702427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-06 04:44:02.702439 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:44:02.702449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-06 04:44:02.702459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-06 04:44:02.702469 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:44:02.702479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-06 04:44:02.702503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-06 04:44:02.702513 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:44:02.702523 | orchestrator | 2026-04-06 04:44:02.702533 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-06 04:44:02.702543 | orchestrator | Monday 06 April 2026 04:43:49 +0000 (0:00:01.801) 0:07:33.386 ********** 2026-04-06 04:44:02.702553 | orchestrator | ok: [testbed-node-0] 2026-04-06 
04:44:02.702563 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:44:02.702575 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:44:02.702587 | orchestrator | 2026-04-06 04:44:02.702599 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-04-06 04:44:02.702611 | orchestrator | Monday 06 April 2026 04:43:51 +0000 (0:00:02.519) 0:07:35.905 ********** 2026-04-06 04:44:02.702623 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:44:02.702635 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:44:02.702664 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:44:02.702676 | orchestrator | 2026-04-06 04:44:02.702688 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-06 04:44:02.702699 | orchestrator | Monday 06 April 2026 04:43:54 +0000 (0:00:03.024) 0:07:38.930 ********** 2026-04-06 04:44:02.702711 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:44:02.702724 | orchestrator | 2026-04-06 04:44:02.702736 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-06 04:44:02.702748 | orchestrator | Monday 06 April 2026 04:43:56 +0000 (0:00:02.221) 0:07:41.151 ********** 2026-04-06 04:44:02.702762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:44:02.702787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:44:02.702800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:44:02.702826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-06 04:44:04.738426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-06 04:44:04.738606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-06 04:44:04.738631 | orchestrator | 2026-04-06 04:44:04.738651 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-04-06 
04:44:04.738669 | orchestrator | Monday 06 April 2026 04:44:04 +0000 (0:00:07.225) 0:07:48.377 ********** 2026-04-06 04:44:04.738705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:44:04.738748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-06 04:44:04.738776 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:44:04.738794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:44:04.738811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-06 04:44:04.738828 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:44:04.738850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:44:04.738881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-06 04:44:16.551957 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:44:16.552085 | orchestrator | 2026-04-06 04:44:16.552108 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-06 04:44:16.552125 | orchestrator | Monday 06 April 2026 04:44:05 +0000 (0:00:01.709) 0:07:50.087 ********** 2026-04-06 04:44:16.552140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:44:16.552159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-06 04:44:16.552177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-06 04:44:16.552193 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:44:16.552207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:44:16.552221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-06 04:44:16.552236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-06 04:44:16.552251 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:44:16.552265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-06 04:44:16.552296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-06 04:44:16.552362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-06 04:44:16.552380 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:44:16.552394 | orchestrator | 2026-04-06 04:44:16.552409 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-04-06 04:44:16.552424 | orchestrator | Monday 06 April 2026 04:44:07 +0000 (0:00:02.034) 0:07:52.121 ********** 2026-04-06 04:44:16.552437 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:44:16.552473 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:44:16.552489 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:44:16.552504 | orchestrator | 2026-04-06 04:44:16.552518 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-04-06 04:44:16.552533 | orchestrator | Monday 06 April 2026 04:44:09 +0000 (0:00:01.513) 0:07:53.634 ********** 2026-04-06 04:44:16.552546 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:44:16.552561 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:44:16.552576 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:44:16.552590 | orchestrator | 2026-04-06 04:44:16.552604 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-04-06 04:44:16.552619 | orchestrator | Monday 06 April 2026 04:44:11 +0000 (0:00:02.329) 0:07:55.964 ********** 2026-04-06 04:44:16.552632 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:44:16.552648 | orchestrator | 2026-04-06 04:44:16.552663 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-04-06 04:44:16.552676 | orchestrator | Monday 06 April 2026 04:44:14 +0000 (0:00:02.646) 0:07:58.611 ********** 2026-04-06 04:44:16.552717 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-06 04:44:16.552738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-06 04:44:16.552762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 04:44:16.552790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 04:44:16.552807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:44:16.552832 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:44:18.721062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:44:18.721162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 04:44:18.721179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 04:44:18.721192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 04:44:18.721224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-06 04:44:18.721266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 04:44:18.721298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:44:18.721311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:44:18.721433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 04:44:18.721463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 04:44:18.721495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 04:44:18.721529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-06 04:44:20.457100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 04:44:20.457194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:44:20.457242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-06 04:44:20.457253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:44:20.457261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:44:20.457285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-06 04:44:20.457293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-06 04:44:20.457300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:44:20.457392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:44:20.457408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-06 04:44:20.457416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:44:20.457424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-06 04:44:20.457431 | orchestrator |
2026-04-06 04:44:20.457440 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-04-06 04:44:20.457448 | orchestrator | Monday 06 April 2026 04:44:19 +0000 (0:00:05.644) 0:08:04.256 **********
2026-04-06 04:44:20.457466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-06 04:44:20.618301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 04:44:20.618454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:44:20.618479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:44:20.618488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 04:44:20.618497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 04:44:20.618520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-06 04:44:20.618529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:44:20.618549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:44:20.618561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-06 04:44:20.618569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-06 04:44:20.618576 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:44:20.618584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 04:44:20.618591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:44:20.618604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:44:20.790603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 04:44:20.790700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 04:44:20.790711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-06 04:44:20.790720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:44:20.790728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:44:20.790766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-06 04:44:20.790773 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:44:20.790785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-06 04:44:20.790793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 04:44:20.790801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:44:20.790808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:44:20.790816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 04:44:20.790834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 04:44:33.237761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})
2026-04-06 04:44:33.237877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:44:33.237893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 04:44:33.237907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-06 04:44:33.237919 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:44:33.237932 | orchestrator |
2026-04-06 04:44:33.237944 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-04-06 04:44:33.237957 | orchestrator | Monday 06 April 2026 04:44:21 +0000 (0:00:01.957) 0:08:06.213 **********
2026-04-06 04:44:33.237970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-06 04:44:33.238008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-06 04:44:33.238086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-06 04:44:33.238119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-06 04:44:33.238133 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:44:33.238145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-06 04:44:33.238164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-06 04:44:33.238177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-06 04:44:33.238189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-06 04:44:33.238201 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:44:33.238213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-06 04:44:33.238225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})
2026-04-06 04:44:33.238237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-06 04:44:33.238257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})
2026-04-06 04:44:33.238269 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:44:33.238281 | orchestrator |
2026-04-06 04:44:33.238295 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-04-06 04:44:33.238308 | orchestrator | Monday 06 April 2026 04:44:24 +0000 (0:00:02.199) 0:08:08.412 **********
2026-04-06 04:44:33.238384 | orchestrator | skipping: 
[testbed-node-0] 2026-04-06 04:44:33.238399 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:44:33.238412 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:44:33.238425 | orchestrator | 2026-04-06 04:44:33.238438 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-04-06 04:44:33.238450 | orchestrator | Monday 06 April 2026 04:44:25 +0000 (0:00:01.747) 0:08:10.160 ********** 2026-04-06 04:44:33.238463 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:44:33.238476 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:44:33.238488 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:44:33.238501 | orchestrator | 2026-04-06 04:44:33.238512 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-06 04:44:33.238523 | orchestrator | Monday 06 April 2026 04:44:28 +0000 (0:00:02.288) 0:08:12.448 ********** 2026-04-06 04:44:33.238534 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:44:33.238545 | orchestrator | 2026-04-06 04:44:33.238555 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-06 04:44:33.238566 | orchestrator | Monday 06 April 2026 04:44:30 +0000 (0:00:02.691) 0:08:15.139 ********** 2026-04-06 04:44:33.238593 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 04:44:47.969438 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 04:44:47.969588 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 04:44:47.969609 | orchestrator | 2026-04-06 04:44:47.969624 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-06 04:44:47.969637 | orchestrator | Monday 06 April 2026 04:44:34 +0000 (0:00:03.547) 0:08:18.687 ********** 2026-04-06 04:44:47.969650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-06 04:44:47.969694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-06 04:44:47.969709 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:44:47.969722 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:44:47.969734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}})  2026-04-06 04:44:47.969755 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:44:47.969766 | orchestrator | 2026-04-06 04:44:47.969778 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-06 04:44:47.969789 | orchestrator | Monday 06 April 2026 04:44:35 +0000 (0:00:01.474) 0:08:20.162 ********** 2026-04-06 04:44:47.969801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-06 04:44:47.969812 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:44:47.969824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-06 04:44:47.969835 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:44:47.969846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-06 04:44:47.969856 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:44:47.969867 | orchestrator | 2026-04-06 04:44:47.969878 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-06 04:44:47.969889 | orchestrator | Monday 06 April 2026 04:44:37 +0000 (0:00:01.816) 0:08:21.978 ********** 2026-04-06 04:44:47.969899 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:44:47.969910 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:44:47.969923 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:44:47.969936 | orchestrator | 2026-04-06 04:44:47.969949 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-06 04:44:47.969960 | orchestrator | Monday 06 April 2026 04:44:39 +0000 (0:00:01.498) 0:08:23.477 ********** 2026-04-06 
04:44:47.969973 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:44:47.969986 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:44:47.969999 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:44:47.970011 | orchestrator | 2026-04-06 04:44:47.970091 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-04-06 04:44:47.970105 | orchestrator | Monday 06 April 2026 04:44:41 +0000 (0:00:02.217) 0:08:25.694 ********** 2026-04-06 04:44:47.970117 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:44:47.970128 | orchestrator | 2026-04-06 04:44:47.970139 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-06 04:44:47.970150 | orchestrator | Monday 06 April 2026 04:44:44 +0000 (0:00:02.749) 0:08:28.444 ********** 2026-04-06 04:44:47.970163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-06 
04:44:47.970234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-06 04:44:52.631469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-06 04:44:52.631578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-06 04:44:52.631614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-06 04:44:52.631670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-06 04:44:52.631685 | orchestrator | 2026-04-06 04:44:52.631699 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-06 04:44:52.631710 | orchestrator | Monday 06 April 2026 04:44:52 +0000 (0:00:07.960) 0:08:36.405 ********** 2026-04-06 04:44:52.631723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-06 04:44:52.631736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-06 04:44:52.631749 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:44:52.631768 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-06 04:44:52.631816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-06 04:45:13.424626 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:45:13.424783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-06 04:45:13.424818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-06 04:45:13.424873 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:45:13.424887 | orchestrator | 2026-04-06 04:45:13.424899 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-06 04:45:13.424911 | orchestrator | Monday 06 April 2026 04:44:54 +0000 (0:00:02.188) 0:08:38.594 ********** 2026-04-06 04:45:13.424940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-06 04:45:13.424954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-06 04:45:13.424967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-06 04:45:13.424980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-06 04:45:13.424991 | orchestrator | skipping: 
[testbed-node-0] 2026-04-06 04:45:13.425002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-06 04:45:13.425014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-06 04:45:13.425044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-06 04:45:13.425056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-04-06 04:45:13.425067 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:45:13.425078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-06 04:45:13.425090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-04-06 04:45:13.425101 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-06 04:45:13.425112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-06 04:45:13.425131 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:45:13.425144 | orchestrator |
2026-04-06 04:45:13.425157 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-04-06 04:45:13.425170 | orchestrator | Monday 06 April 2026 04:44:56 +0000 (0:00:02.364) 0:08:40.958 **********
2026-04-06 04:45:13.425182 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:45:13.425195 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:45:13.425208 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:45:13.425220 | orchestrator |
2026-04-06 04:45:13.425233 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-04-06 04:45:13.425247 | orchestrator | Monday 06 April 2026 04:44:58 +0000 (0:00:02.303) 0:08:43.262 **********
2026-04-06 04:45:13.425259 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:45:13.425272 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:45:13.425284 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:45:13.425296 | orchestrator |
2026-04-06 04:45:13.425309 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-04-06 04:45:13.425322 | orchestrator | Monday 06 April 2026 04:45:02 +0000 (0:00:03.068) 0:08:46.332 **********
2026-04-06 04:45:13.425407 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:45:13.425422 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:45:13.425435 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:45:13.425447 | orchestrator |
2026-04-06 04:45:13.425460 | orchestrator | TASK [include_role : trove] ****************************************************
2026-04-06 04:45:13.425473 | orchestrator | Monday 06 April 2026 04:45:03 +0000 (0:00:01.451) 0:08:47.783 **********
2026-04-06 04:45:13.425486 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:45:13.425497 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:45:13.425508 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:45:13.425519 | orchestrator |
2026-04-06 04:45:13.425530 | orchestrator | TASK [include_role : venus] ****************************************************
2026-04-06 04:45:13.425541 | orchestrator | Monday 06 April 2026 04:45:04 +0000 (0:00:01.385) 0:08:49.169 **********
2026-04-06 04:45:13.425552 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:45:13.425562 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:45:13.425573 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:45:13.425584 | orchestrator |
2026-04-06 04:45:13.425595 | orchestrator | TASK [include_role : watcher] **************************************************
2026-04-06 04:45:13.425606 | orchestrator | Monday 06 April 2026 04:45:06 +0000 (0:00:01.389) 0:08:50.559 **********
2026-04-06 04:45:13.425616 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:45:13.425627 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:45:13.425638 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:45:13.425649 | orchestrator |
2026-04-06 04:45:13.425660 | orchestrator | TASK [include_role : zun] ******************************************************
2026-04-06 04:45:13.425670 | orchestrator | Monday 06 April 2026 04:45:07 +0000 (0:00:01.391) 0:08:51.950 **********
2026-04-06 04:45:13.425685 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:45:13.425705 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:45:13.425723 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:45:13.425743 | orchestrator |
2026-04-06 04:45:13.425762 | orchestrator | TASK [include_role : loadbalancer] *********************************************
2026-04-06 04:45:13.425782 | orchestrator | Monday 06 April 2026 04:45:09 +0000 (0:00:01.856) 0:08:53.807 **********
2026-04-06 04:45:13.425802 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 04:45:13.425822 | orchestrator |
2026-04-06 04:45:13.425841 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-04-06 04:45:13.425862 | orchestrator | Monday 06 April 2026 04:45:11 +0000 (0:00:02.374) 0:08:56.181 **********
2026-04-06 04:45:13.425897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-06 04:45:18.415200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-06 04:45:18.415328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-06 04:45:18.415438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-06 04:45:18.415460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-06 04:45:18.415479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-06 04:45:18.415498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-06 04:45:18.415569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-06 04:45:18.415582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-06 04:45:18.415599 | orchestrator |
2026-04-06 04:45:18.415618 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-04-06 04:45:18.415636 | orchestrator | Monday 06 April 2026 04:45:16 +0000 (0:00:04.547) 0:09:00.729 **********
2026-04-06 04:45:18.415654 | orchestrator | changed: [testbed-node-0] => {
2026-04-06 04:45:18.415672 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 04:45:18.415689 | orchestrator | }
2026-04-06 04:45:18.415706 | orchestrator | changed: [testbed-node-1] => {
2026-04-06 04:45:18.415723 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 04:45:18.415740 | orchestrator | }
2026-04-06 04:45:18.415757 | orchestrator | changed: [testbed-node-2] => {
2026-04-06 04:45:18.415773 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 04:45:18.415789 | orchestrator | }
2026-04-06 04:45:18.415807 | orchestrator |
2026-04-06 04:45:18.415824 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-06 04:45:18.415841 | orchestrator | Monday 06 April 2026 04:45:17 +0000 (0:00:01.508) 0:09:02.238 **********
2026-04-06 04:45:18.415865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-06 04:45:18.415886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-06 04:45:18.415903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-06 04:45:18.415932 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:45:18.415951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-06 04:45:18.415982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-06 04:47:22.243491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-06 04:47:22.243614 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:47:22.243633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-06 04:47:22.243663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-06 04:47:22.243675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-06 04:47:22.243712 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:47:22.243724 | orchestrator |
2026-04-06 04:47:22.243736 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-04-06 04:47:22.243749 | orchestrator | Monday 06 April 2026 04:45:20 +0000 (0:00:02.468) 0:09:04.706 **********
2026-04-06 04:47:22.243759 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:47:22.243771 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:47:22.243782 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:47:22.243793 | orchestrator |
2026-04-06 04:47:22.243803 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-04-06 04:47:22.243814 | orchestrator | Monday 06 April 2026 04:45:22 +0000 (0:00:01.518) 0:09:06.485 **********
2026-04-06 04:47:22.243825 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:47:22.243835 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:47:22.243846 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:47:22.243857 | orchestrator |
2026-04-06 04:47:22.243867 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-04-06 04:47:22.243878 | orchestrator | Monday 06 April 2026 04:45:23 +0000 (0:00:07.145) 0:09:08.004 **********
2026-04-06 04:47:22.243894 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:47:22.243906 | orchestrator | changed: [testbed-node-1]
2026-04-06 04:47:22.243917 | orchestrator | changed: [testbed-node-2]
2026-04-06 04:47:22.243928 | orchestrator |
2026-04-06 04:47:22.243939 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-04-06 04:47:22.243950 | orchestrator | Monday 06 April 2026 04:45:30 +0000 (0:00:07.138) 0:09:15.150 **********
2026-04-06 04:47:22.243961 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:47:22.243971 | orchestrator | changed: [testbed-node-1]
2026-04-06 04:47:22.243982 | orchestrator | changed: [testbed-node-2]
2026-04-06 04:47:22.243993 | orchestrator |
2026-04-06 04:47:22.244008 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-04-06 04:47:22.244021 | orchestrator | Monday 06 April 2026 04:45:38 +0000 (0:00:07.017) 0:09:22.288 **********
2026-04-06 04:47:22.244033 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:47:22.244046 | orchestrator | changed: [testbed-node-1]
2026-04-06 04:47:22.244058 | orchestrator | changed: [testbed-node-2]
2026-04-06 04:47:22.244070 | orchestrator |
2026-04-06 04:47:22.244082 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-04-06 04:47:22.244096 | orchestrator | Monday 06 April 2026 04:45:45 +0000 (0:00:07.733) 0:09:29.306 **********
2026-04-06 04:47:22.244108 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:47:22.244120 | orchestrator | changed: [testbed-node-1]
2026-04-06 04:47:22.244133 | orchestrator | changed: [testbed-node-2]
2026-04-06 04:47:22.244146 | orchestrator |
2026-04-06 04:47:22.244175 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-04-06 04:47:22.244189 | orchestrator | Monday 06 April 2026 04:45:52 +0000 (0:00:03.805) 0:09:37.040 **********
2026-04-06 04:47:22.244201 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:47:22.244214 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:47:22.244226 | orchestrator |
2026-04-06 04:47:22.244239 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-04-06 04:47:22.244251 | orchestrator | Monday 06 April 2026 04:45:56 +0000 (0:00:03.805) 0:09:40.845 **********
2026-04-06 04:47:22.244263 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:47:22.244276 | orchestrator | changed: [testbed-node-2]
2026-04-06 04:47:22.244288 | orchestrator | changed: [testbed-node-1]
2026-04-06 04:47:22.244300 | orchestrator |
2026-04-06 04:47:22.244314 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-04-06 04:47:22.244335 | orchestrator | Monday 06 April 2026 04:46:09 +0000 (0:00:13.402) 0:09:54.247 **********
2026-04-06 04:47:22.244347 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:47:22.244360 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:47:22.244394 | orchestrator |
2026-04-06 04:47:22.244405 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-04-06 04:47:22.244415 | orchestrator | Monday 06 April 2026 04:46:14 +0000 (0:00:04.624) 0:09:58.872 **********
2026-04-06 04:47:22.244426 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:47:22.244437 | orchestrator | changed: [testbed-node-1]
2026-04-06 04:47:22.244448 | orchestrator | changed: [testbed-node-2]
2026-04-06 04:47:22.244458 | orchestrator |
2026-04-06 04:47:22.244469 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-04-06 04:47:22.244480 | orchestrator | Monday 06 April 2026 04:46:22 +0000 (0:00:07.490) 0:10:06.362 **********
2026-04-06 04:47:22.244491 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:47:22.244501 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:47:22.244512 | orchestrator | changed: [testbed-node-0]
2026-04-06 04:47:22.244523 | orchestrator |
2026-04-06 04:47:22.244539 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-04-06 04:47:22.244551 | orchestrator | Monday 06 April 2026 04:46:28 +0000 (0:00:06.830) 0:10:13.192 **********
2026-04-06 04:47:22.244561 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:47:22.244572 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:47:22.244583 | orchestrator | changed: [testbed-node-0]
2026-04-06 04:47:22.244594 | orchestrator |
2026-04-06 04:47:22.244604 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-04-06 04:47:22.244615 | orchestrator | Monday 06 April 2026 04:46:35 +0000 (0:00:06.873) 0:10:20.066 **********
2026-04-06 04:47:22.244626 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:47:22.244637 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:47:22.244648 | orchestrator | changed: [testbed-node-0]
2026-04-06 04:47:22.244658 | orchestrator |
2026-04-06 04:47:22.244669 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-04-06 04:47:22.244680 | orchestrator | Monday 06 April 2026 04:46:42 +0000 (0:00:06.820) 0:10:26.886 **********
2026-04-06 04:47:22.244691 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:47:22.244702 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:47:22.244712 | orchestrator | changed: [testbed-node-0]
2026-04-06 04:47:22.244727 | orchestrator |
2026-04-06 04:47:22.244744 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] **************
2026-04-06 04:47:22.244763 | orchestrator | Monday 06 April 2026 04:46:50 +0000 (0:00:07.671) 0:10:34.558 **********
2026-04-06 04:47:22.244780 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:47:22.244796 | orchestrator |
2026-04-06 04:47:22.244815 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-04-06 04:47:22.244833 | orchestrator | Monday 06 April 2026 04:46:53 +0000 (0:00:03.625) 0:10:38.184 **********
2026-04-06 04:47:22.244852 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:47:22.244864 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:47:22.244875 | orchestrator | changed: [testbed-node-0]
2026-04-06 04:47:22.244886 | orchestrator |
2026-04-06 04:47:22.244897 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] *************
2026-04-06 04:47:22.244908 | orchestrator | Monday 06 April 2026 04:47:07 +0000 (0:00:13.159) 0:10:51.344 **********
2026-04-06 04:47:22.244919 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:47:22.244930 | orchestrator |
2026-04-06 04:47:22.244941 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-04-06 04:47:22.244952 | orchestrator | Monday 06 April 2026 04:47:11 +0000 (0:00:04.599) 0:10:55.943 **********
2026-04-06 04:47:22.244962 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:47:22.244974 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:47:22.244985 | orchestrator | changed: [testbed-node-0]
2026-04-06 04:47:22.244996 | orchestrator |
2026-04-06 04:47:22.245006 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-04-06 04:47:22.245025 | orchestrator | Monday 06 April 2026 04:47:18 +0000 (0:00:07.107) 0:11:03.051 **********
2026-04-06 04:47:22.245036 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:47:22.245047 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:47:22.245058 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:47:22.245069 | orchestrator |
2026-04-06 04:47:22.245080 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-04-06 04:47:22.245091 | orchestrator | Monday 06 April 2026 04:47:21 +0000 (0:00:02.580) 0:11:05.631 **********
2026-04-06 04:47:22.245102 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:47:22.245113 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:47:22.245124 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:47:22.245134 | orchestrator |
2026-04-06 04:47:22.245145 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 04:47:22.245157 | orchestrator | testbed-node-0 : ok=129  changed=30  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-04-06 04:47:22.245170 | orchestrator | testbed-node-1 : ok=128  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-04-06 04:47:22.245190 | orchestrator | testbed-node-2 : ok=128  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-04-06 04:47:24.600655 | orchestrator |
2026-04-06 04:47:24.600797 | orchestrator |
2026-04-06 04:47:24.600825 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 04:47:24.600847 | orchestrator | Monday 06 April 2026 04:47:23 +0000 (0:00:02.413) 0:11:08.045 **********
2026-04-06 04:47:24.600867 | orchestrator | ===============================================================================
2026-04-06 04:47:24.600886 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.40s
2026-04-06 04:47:24.600906 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 13.16s
2026-04-06 04:47:24.600926 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.96s
2026-04-06 04:47:24.600944 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 7.73s
2026-04-06 04:47:24.600964 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 7.67s
2026-04-06 04:47:24.600983 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.49s
2026-04-06 04:47:24.601003 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.23s
2026-04-06 04:47:24.601022 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 7.15s
2026-04-06 04:47:24.601043 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 7.14s
2026-04-06 04:47:24.601062 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 7.11s
2026-04-06 04:47:24.601082 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 7.02s
2026-04-06 04:47:24.601102 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.96s
2026-04-06 04:47:24.601142 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 6.87s
2026-04-06 04:47:24.601162 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 6.83s
2026-04-06 04:47:24.601181 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 6.82s
2026-04-06 04:47:24.601201 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 6.10s
2026-04-06 04:47:24.601222 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.84s
2026-04-06 04:47:24.601241 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.65s
2026-04-06 04:47:24.601261 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.36s
2026-04-06 04:47:24.601280 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.31s
2026-04-06 04:47:24.807842 | orchestrator | + osism apply -a upgrade opensearch
2026-04-06 04:47:26.157032 | orchestrator | 2026-04-06 04:47:26 | INFO  | Prepare task for execution of opensearch.
2026-04-06 04:47:26.223006 | orchestrator | 2026-04-06 04:47:26 | INFO  | Task e2803e19-46c3-4ad8-8ce6-9d072bacd657 (opensearch) was prepared for execution.
2026-04-06 04:47:26.223126 | orchestrator | 2026-04-06 04:47:26 | INFO  | It takes a moment until task e2803e19-46c3-4ad8-8ce6-9d072bacd657 (opensearch) has been started and output is visible here.
2026-04-06 04:47:44.321864 | orchestrator |
2026-04-06 04:47:44.321982 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 04:47:44.321999 | orchestrator |
2026-04-06 04:47:44.322012 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 04:47:44.322091 | orchestrator | Monday 06 April 2026 04:47:31 +0000 (0:00:01.666) 0:00:01.666 **********
2026-04-06 04:47:44.322103 | orchestrator | ok: [testbed-node-0]
2026-04-06 04:47:44.322114 | orchestrator | ok: [testbed-node-1]
2026-04-06 04:47:44.322125 | orchestrator | ok: [testbed-node-2]
2026-04-06 04:47:44.322136 | orchestrator |
2026-04-06 04:47:44.322147 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 04:47:44.322158 | orchestrator | Monday 06 April 2026 04:47:33 +0000 (0:00:01.949) 0:00:03.615 **********
2026-04-06 04:47:44.322170 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-04-06 04:47:44.322182 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-04-06 04:47:44.322207 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-04-06 04:47:44.322219 | orchestrator |
2026-04-06 04:47:44.322230 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-04-06 04:47:44.322241 | orchestrator |
2026-04-06 04:47:44.322252 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-06 04:47:44.322263 | orchestrator | Monday 06 April 2026 04:47:35 +0000 (0:00:02.347) 0:00:05.963 **********
2026-04-06 04:47:44.322274 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 04:47:44.322285 | orchestrator |
2026-04-06 04:47:44.322296 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-04-06 04:47:44.322308 | orchestrator | Monday 06 April 2026 04:47:38 +0000 (0:00:03.153) 0:00:09.116 **********
2026-04-06 04:47:44.322319 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-06 04:47:44.322330 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-06 04:47:44.322341 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-06 04:47:44.322352 | orchestrator |
2026-04-06 04:47:44.322363 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-04-06 04:47:44.322396 | orchestrator | Monday 06 April 2026 04:47:41 +0000 (0:00:02.542) 0:00:11.659 **********
2026-04-06 04:47:44.322413 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 04:47:44.322450 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 04:47:44.322511 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 04:47:44.322528 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-06 04:47:44.322544 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-06 04:47:44.322573 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-06 04:47:44.322587 | orchestrator | 2026-04-06 04:47:44.322601 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-06 04:47:44.322614 | orchestrator | Monday 06 April 2026 04:47:43 +0000 (0:00:02.356) 0:00:14.016 ********** 2026-04-06 04:47:44.322627 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:47:44.322640 | orchestrator | 2026-04-06 04:47:44.322660 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-06 04:47:49.210616 | orchestrator | Monday 06 April 2026 04:47:45 +0000 (0:00:01.865) 
0:00:15.882 ********** 2026-04-06 04:47:49.210741 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:47:49.210761 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:47:49.210772 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:47:49.210826 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option 
httpchk GET /api/status']}}}}) 2026-04-06 04:47:49.210861 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-06 04:47:49.210874 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-06 04:47:49.210892 | orchestrator | 2026-04-06 04:47:49.210903 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-06 04:47:49.210913 | orchestrator | Monday 06 April 2026 04:47:48 +0000 (0:00:03.325) 0:00:19.207 ********** 2026-04-06 04:47:49.210928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:47:49.210948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 
'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-06 04:47:51.581111 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:47:51.581204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:47:51.581221 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-06 04:47:51.581251 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:47:51.581273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 
'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:47:51.581299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-06 04:47:51.581310 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:47:51.581318 | orchestrator | 2026-04-06 04:47:51.581327 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-06 04:47:51.581336 | orchestrator | Monday 06 April 2026 04:47:50 +0000 (0:00:02.046) 0:00:21.254 ********** 2026-04-06 04:47:51.581345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:47:51.581360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-06 04:47:51.581369 | 
orchestrator | skipping: [testbed-node-0] 2026-04-06 04:47:51.581429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:47:51.581446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-06 04:47:55.304157 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:47:55.304269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:47:55.304328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-06 04:47:55.304344 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:47:55.304357 | orchestrator | 2026-04-06 04:47:55.304370 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-06 04:47:55.304463 | orchestrator | Monday 06 April 2026 04:47:52 +0000 (0:00:02.036) 0:00:23.290 ********** 2026-04-06 04:47:55.304476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:47:55.304508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:47:55.304521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:47:55.304549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-06 04:47:55.304563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-06 04:47:55.304585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-06 04:48:08.328800 | orchestrator | 2026-04-06 04:48:08.328914 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-06 04:48:08.328931 | orchestrator | Monday 06 April 2026 04:47:56 +0000 (0:00:03.690) 0:00:26.981 ********** 2026-04-06 04:48:08.328943 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:48:08.328956 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:48:08.328967 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:48:08.328978 | orchestrator | 2026-04-06 04:48:08.328990 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-06 04:48:08.329001 | 
orchestrator | Monday 06 April 2026 04:47:59 +0000 (0:00:03.421) 0:00:30.403 ********** 2026-04-06 04:48:08.329012 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:48:08.329023 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:48:08.329034 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:48:08.329045 | orchestrator | 2026-04-06 04:48:08.329056 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-04-06 04:48:08.329067 | orchestrator | Monday 06 April 2026 04:48:03 +0000 (0:00:03.311) 0:00:33.714 ********** 2026-04-06 04:48:08.329080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:48:08.329112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:48:08.329125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 04:48:08.329179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-06 04:48:08.329200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-06 04:48:08.329215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-06 04:48:08.329227 | orchestrator | 2026-04-06 04:48:08.329239 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-04-06 04:48:08.329251 | orchestrator | Monday 06 April 2026 04:48:06 +0000 (0:00:03.374) 0:00:37.089 ********** 2026-04-06 04:48:08.329263 | orchestrator | changed: [testbed-node-0] => { 2026-04-06 04:48:08.329280 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 04:48:08.329310 | orchestrator | } 2026-04-06 04:48:08.329322 | orchestrator | changed: [testbed-node-1] => { 2026-04-06 04:48:08.329334 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 04:48:08.329348 | orchestrator | } 2026-04-06 04:48:08.329361 | orchestrator | changed: [testbed-node-2] => { 2026-04-06 04:48:08.329374 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 04:48:08.329415 | orchestrator | } 2026-04-06 04:48:08.329428 | 
orchestrator | 2026-04-06 04:48:08.329441 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-06 04:48:08.329454 | orchestrator | Monday 06 April 2026 04:48:07 +0000 (0:00:01.381) 0:00:38.471 ********** 2026-04-06 04:48:08.329478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:51:19.735668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-06 04:51:19.735814 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:51:19.735854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:51:19.735871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-06 04:51:19.735907 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:51:19.735938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 04:51:19.735956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-06 04:51:19.735968 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:51:19.735980 | orchestrator | 2026-04-06 04:51:19.735992 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-06 04:51:19.736005 | orchestrator | Monday 06 April 2026 04:48:10 +0000 (0:00:02.393) 0:00:40.865 ********** 2026-04-06 04:51:19.736016 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:51:19.736027 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:51:19.736038 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:51:19.736049 | orchestrator | 2026-04-06 04:51:19.736059 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-06 04:51:19.736071 | orchestrator | Monday 06 April 2026 04:48:11 +0000 (0:00:01.356) 0:00:42.221 ********** 2026-04-06 04:51:19.736081 | orchestrator | 2026-04-06 04:51:19.736092 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-06 04:51:19.736103 | 
orchestrator | Monday 06 April 2026 04:48:12 +0000 (0:00:00.470) 0:00:42.691 ********** 2026-04-06 04:51:19.736123 | orchestrator | 2026-04-06 04:51:19.736134 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-06 04:51:19.736144 | orchestrator | Monday 06 April 2026 04:48:12 +0000 (0:00:00.443) 0:00:43.135 ********** 2026-04-06 04:51:19.736155 | orchestrator | 2026-04-06 04:51:19.736166 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-06 04:51:19.736176 | orchestrator | Monday 06 April 2026 04:48:13 +0000 (0:00:00.845) 0:00:43.980 ********** 2026-04-06 04:51:19.736187 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:51:19.736200 | orchestrator | 2026-04-06 04:51:19.736213 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-06 04:51:19.736226 | orchestrator | Monday 06 April 2026 04:48:17 +0000 (0:00:03.721) 0:00:47.701 ********** 2026-04-06 04:51:19.736238 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:51:19.736250 | orchestrator | 2026-04-06 04:51:19.736264 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-06 04:51:19.736276 | orchestrator | Monday 06 April 2026 04:48:22 +0000 (0:00:04.887) 0:00:52.589 ********** 2026-04-06 04:51:19.736289 | orchestrator | changed: [testbed-node-2] 2026-04-06 04:51:19.736302 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:51:19.736314 | orchestrator | changed: [testbed-node-1] 2026-04-06 04:51:19.736327 | orchestrator | 2026-04-06 04:51:19.736340 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-04-06 04:51:19.736352 | orchestrator | Monday 06 April 2026 04:49:31 +0000 (0:01:09.806) 0:02:02.396 ********** 2026-04-06 04:51:19.736364 | orchestrator | changed: [testbed-node-2] 2026-04-06 04:51:19.736377 | orchestrator | changed: [testbed-node-1] 
2026-04-06 04:51:19.736389 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:51:19.736401 | orchestrator | 2026-04-06 04:51:19.736414 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-06 04:51:19.736426 | orchestrator | Monday 06 April 2026 04:51:07 +0000 (0:01:35.837) 0:03:38.233 ********** 2026-04-06 04:51:19.736440 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:51:19.736452 | orchestrator | 2026-04-06 04:51:19.736491 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-04-06 04:51:19.736502 | orchestrator | Monday 06 April 2026 04:51:09 +0000 (0:00:01.846) 0:03:40.080 ********** 2026-04-06 04:51:19.736513 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:51:19.736524 | orchestrator | 2026-04-06 04:51:19.736534 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-04-06 04:51:19.736545 | orchestrator | Monday 06 April 2026 04:51:13 +0000 (0:00:03.414) 0:03:43.494 ********** 2026-04-06 04:51:19.736556 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:51:19.736567 | orchestrator | 2026-04-06 04:51:19.736577 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-04-06 04:51:19.736588 | orchestrator | Monday 06 April 2026 04:51:16 +0000 (0:00:03.228) 0:03:46.723 ********** 2026-04-06 04:51:19.736599 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:51:19.736610 | orchestrator | 2026-04-06 04:51:19.736621 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-04-06 04:51:19.736639 | orchestrator | Monday 06 April 2026 04:51:19 +0000 (0:00:03.488) 0:03:50.211 ********** 2026-04-06 04:51:23.139176 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:51:23.139259 | orchestrator | 2026-04-06 04:51:23.139269 | 
orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-04-06 04:51:23.139277 | orchestrator | Monday 06 April 2026 04:51:20 +0000 (0:00:01.243) 0:03:51.454 ********** 2026-04-06 04:51:23.139284 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:51:23.139290 | orchestrator | 2026-04-06 04:51:23.139297 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 04:51:23.139304 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-06 04:51:23.139333 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-06 04:51:23.139340 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-06 04:51:23.139346 | orchestrator | 2026-04-06 04:51:23.139352 | orchestrator | 2026-04-06 04:51:23.139358 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 04:51:23.139364 | orchestrator | Monday 06 April 2026 04:51:22 +0000 (0:00:01.696) 0:03:53.151 ********** 2026-04-06 04:51:23.139371 | orchestrator | =============================================================================== 2026-04-06 04:51:23.139377 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 95.84s 2026-04-06 04:51:23.139383 | orchestrator | opensearch : Restart opensearch container ------------------------------ 69.81s 2026-04-06 04:51:23.139389 | orchestrator | opensearch : Perform a flush -------------------------------------------- 4.89s 2026-04-06 04:51:23.139395 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 3.72s 2026-04-06 04:51:23.139402 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.69s 2026-04-06 04:51:23.139408 | orchestrator | opensearch : Check if a log 
retention policy exists --------------------- 3.49s 2026-04-06 04:51:23.139414 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.42s 2026-04-06 04:51:23.139420 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.41s 2026-04-06 04:51:23.139427 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 3.37s 2026-04-06 04:51:23.139508 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.33s 2026-04-06 04:51:23.139517 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 3.31s 2026-04-06 04:51:23.139524 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 3.23s 2026-04-06 04:51:23.139530 | orchestrator | opensearch : include_tasks ---------------------------------------------- 3.15s 2026-04-06 04:51:23.139536 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.54s 2026-04-06 04:51:23.139542 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.39s 2026-04-06 04:51:23.139548 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.36s 2026-04-06 04:51:23.139555 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.35s 2026-04-06 04:51:23.139561 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 2.05s 2026-04-06 04:51:23.139568 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 2.03s 2026-04-06 04:51:23.139574 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.95s 2026-04-06 04:51:23.340162 | orchestrator | + osism apply -a upgrade memcached 2026-04-06 04:51:24.687950 | orchestrator | 2026-04-06 04:51:24 | INFO  | Prepare task for execution of memcached. 
2026-04-06 04:51:24.761776 | orchestrator | 2026-04-06 04:51:24 | INFO  | Task 9805cf5b-1eb0-423d-8798-d136748bc172 (memcached) was prepared for execution. 2026-04-06 04:51:24.761859 | orchestrator | 2026-04-06 04:51:24 | INFO  | It takes a moment until task 9805cf5b-1eb0-423d-8798-d136748bc172 (memcached) has been started and output is visible here. 2026-04-06 04:52:01.659441 | orchestrator | 2026-04-06 04:52:01.659588 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 04:52:01.659601 | orchestrator | 2026-04-06 04:52:01.659607 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 04:52:01.659614 | orchestrator | Monday 06 April 2026 04:51:31 +0000 (0:00:03.022) 0:00:03.022 ********** 2026-04-06 04:52:01.659621 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:52:01.659628 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:52:01.659657 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:52:01.659662 | orchestrator | 2026-04-06 04:52:01.659668 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 04:52:01.659674 | orchestrator | Monday 06 April 2026 04:51:33 +0000 (0:00:02.026) 0:00:05.048 ********** 2026-04-06 04:52:01.659682 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-06 04:52:01.659688 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-06 04:52:01.659694 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-06 04:52:01.659700 | orchestrator | 2026-04-06 04:52:01.659706 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-06 04:52:01.659712 | orchestrator | 2026-04-06 04:52:01.659720 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-04-06 04:52:01.659727 | orchestrator | Monday 06 April 2026 04:51:34 +0000 
(0:00:01.822) 0:00:06.870 ********** 2026-04-06 04:52:01.659734 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:52:01.659741 | orchestrator | 2026-04-06 04:52:01.659748 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-04-06 04:52:01.659754 | orchestrator | Monday 06 April 2026 04:51:40 +0000 (0:00:05.143) 0:00:12.014 ********** 2026-04-06 04:52:01.659761 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-04-06 04:52:01.659768 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-04-06 04:52:01.659775 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-04-06 04:52:01.659782 | orchestrator | 2026-04-06 04:52:01.659789 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-06 04:52:01.659796 | orchestrator | Monday 06 April 2026 04:51:42 +0000 (0:00:02.300) 0:00:14.315 ********** 2026-04-06 04:52:01.659803 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-04-06 04:52:01.659809 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-04-06 04:52:01.659816 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-04-06 04:52:01.659823 | orchestrator | 2026-04-06 04:52:01.659830 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-04-06 04:52:01.659837 | orchestrator | Monday 06 April 2026 04:51:45 +0000 (0:00:02.638) 0:00:16.954 ********** 2026-04-06 04:52:01.659860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-06 04:52:01.659871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-06 04:52:01.659894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-06 04:52:01.659910 | 
orchestrator | 2026-04-06 04:52:01.659917 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-04-06 04:52:01.659924 | orchestrator | Monday 06 April 2026 04:51:47 +0000 (0:00:02.345) 0:00:19.300 ********** 2026-04-06 04:52:01.659932 | orchestrator | changed: [testbed-node-0] => { 2026-04-06 04:52:01.659939 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 04:52:01.659946 | orchestrator | } 2026-04-06 04:52:01.659952 | orchestrator | changed: [testbed-node-1] => { 2026-04-06 04:52:01.659958 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 04:52:01.659964 | orchestrator | } 2026-04-06 04:52:01.659970 | orchestrator | changed: [testbed-node-2] => { 2026-04-06 04:52:01.659976 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 04:52:01.659982 | orchestrator | } 2026-04-06 04:52:01.659989 | orchestrator | 2026-04-06 04:52:01.659995 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-06 04:52:01.660001 | orchestrator | Monday 06 April 2026 04:51:49 +0000 (0:00:01.585) 0:00:20.885 ********** 2026-04-06 04:52:01.660008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-06 04:52:01.660015 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-06 04:52:01.660021 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:52:01.660033 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:52:01.660041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-06 04:52:01.660053 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:52:01.660060 | orchestrator | 2026-04-06 04:52:01.660066 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-06 04:52:01.660072 | orchestrator | Monday 06 April 
2026 04:51:51 +0000 (0:00:01.998) 0:00:22.883 ********** 2026-04-06 04:52:01.660078 | orchestrator | changed: [testbed-node-1] 2026-04-06 04:52:01.660085 | orchestrator | changed: [testbed-node-2] 2026-04-06 04:52:01.660090 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:52:01.660097 | orchestrator | 2026-04-06 04:52:01.660103 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 04:52:01.660111 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 04:52:01.660119 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 04:52:01.660126 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 04:52:01.660132 | orchestrator | 2026-04-06 04:52:01.660139 | orchestrator | 2026-04-06 04:52:01.660146 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 04:52:01.660159 | orchestrator | Monday 06 April 2026 04:52:01 +0000 (0:00:10.646) 0:00:33.530 ********** 2026-04-06 04:52:01.990186 | orchestrator | =============================================================================== 2026-04-06 04:52:01.990285 | orchestrator | memcached : Restart memcached container -------------------------------- 10.65s 2026-04-06 04:52:01.990299 | orchestrator | memcached : include_tasks ----------------------------------------------- 5.14s 2026-04-06 04:52:01.990311 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.64s 2026-04-06 04:52:01.990322 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.35s 2026-04-06 04:52:01.990333 | orchestrator | memcached : Ensuring config directories exist --------------------------- 2.30s 2026-04-06 04:52:01.990344 | orchestrator | Group hosts based on Kolla action 
--------------------------------------- 2.03s 2026-04-06 04:52:01.990355 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.00s 2026-04-06 04:52:01.990367 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.82s 2026-04-06 04:52:01.990378 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 1.58s 2026-04-06 04:52:02.190942 | orchestrator | + osism apply -a upgrade redis 2026-04-06 04:52:03.505647 | orchestrator | 2026-04-06 04:52:03 | INFO  | Prepare task for execution of redis. 2026-04-06 04:52:03.570343 | orchestrator | 2026-04-06 04:52:03 | INFO  | Task 08fd8a9f-7629-4a22-993f-dbf94ca393d3 (redis) was prepared for execution. 2026-04-06 04:52:03.570437 | orchestrator | 2026-04-06 04:52:03 | INFO  | It takes a moment until task 08fd8a9f-7629-4a22-993f-dbf94ca393d3 (redis) has been started and output is visible here. 2026-04-06 04:52:20.199109 | orchestrator | 2026-04-06 04:52:20.199218 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 04:52:20.199235 | orchestrator | 2026-04-06 04:52:20.199246 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 04:52:20.199257 | orchestrator | Monday 06 April 2026 04:52:08 +0000 (0:00:02.035) 0:00:02.035 ********** 2026-04-06 04:52:20.199267 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:52:20.199278 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:52:20.199289 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:52:20.199298 | orchestrator | 2026-04-06 04:52:20.199307 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 04:52:20.199316 | orchestrator | Monday 06 April 2026 04:52:10 +0000 (0:00:01.761) 0:00:03.796 ********** 2026-04-06 04:52:20.199350 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-04-06 
04:52:20.199361 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-04-06 04:52:20.199372 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-04-06 04:52:20.199382 | orchestrator | 2026-04-06 04:52:20.199392 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-04-06 04:52:20.199400 | orchestrator | 2026-04-06 04:52:20.199410 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-04-06 04:52:20.199419 | orchestrator | Monday 06 April 2026 04:52:12 +0000 (0:00:01.785) 0:00:05.582 ********** 2026-04-06 04:52:20.199443 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:52:20.199456 | orchestrator | 2026-04-06 04:52:20.199467 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-04-06 04:52:20.199477 | orchestrator | Monday 06 April 2026 04:52:15 +0000 (0:00:03.102) 0:00:08.685 ********** 2026-04-06 04:52:20.199491 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-06 04:52:20.199534 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-06 04:52:20.199548 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-06 04:52:20.199561 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-06 04:52:20.199595 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-06 04:52:20.199616 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-06 04:52:20.199627 | orchestrator | 2026-04-06 04:52:20.199644 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-04-06 04:52:20.199655 | orchestrator | Monday 06 April 2026 04:52:18 +0000 (0:00:02.727) 0:00:11.413 ********** 2026-04-06 04:52:20.199667 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-06 04:52:20.199680 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-06 04:52:20.199692 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-06 04:52:20.199705 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': 
'30'}}}) 2026-04-06 04:52:20.199723 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-06 04:52:27.458742 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-06 04:52:27.458843 | orchestrator | 2026-04-06 04:52:27.458857 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-04-06 04:52:27.458882 | orchestrator | Monday 06 April 2026 04:52:21 +0000 (0:00:03.132) 0:00:14.545 ********** 2026-04-06 04:52:27.458894 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-06 04:52:27.458905 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-06 04:52:27.458915 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-06 04:52:27.458924 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-06 04:52:27.458933 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-06 04:52:27.458979 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-06 04:52:27.458989 | orchestrator | 2026-04-06 04:52:27.458998 | orchestrator 
| TASK [service-check-containers : redis | Check containers] ********************* 2026-04-06 04:52:27.459007 | orchestrator | Monday 06 April 2026 04:52:25 +0000 (0:00:04.116) 0:00:18.662 ********** 2026-04-06 04:52:27.459021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-06 04:52:27.459031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-06 04:52:27.459041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 
'timeout': '30'}}}) 2026-04-06 04:52:27.459050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-06 04:52:27.459066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-06 04:52:27.459084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-06 04:52:56.244823 | orchestrator | 2026-04-06 04:52:56.244952 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-04-06 04:52:56.244969 | orchestrator | Monday 06 April 2026 04:52:28 +0000 (0:00:03.058) 0:00:21.721 ********** 2026-04-06 04:52:56.244982 | orchestrator | changed: [testbed-node-0] => { 2026-04-06 04:52:56.244995 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 04:52:56.245007 | orchestrator | } 2026-04-06 04:52:56.245018 | orchestrator | changed: [testbed-node-1] => { 2026-04-06 04:52:56.245046 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 04:52:56.245058 | orchestrator | } 2026-04-06 04:52:56.245070 | orchestrator | changed: [testbed-node-2] => { 2026-04-06 04:52:56.245081 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 04:52:56.245092 | orchestrator | } 2026-04-06 04:52:56.245103 | orchestrator | 2026-04-06 04:52:56.245114 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-06 04:52:56.245125 | orchestrator | Monday 06 April 2026 04:52:29 +0000 (0:00:01.414) 0:00:23.136 ********** 2026-04-06 04:52:56.245140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-server 6379'], 'timeout': '30'}}})  2026-04-06 04:52:56.245154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-06 04:52:56.245167 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:52:56.245178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-06 04:52:56.245215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-06 04:52:56.245227 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:52:56.245238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-04-06 04:52:56.245274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-04-06 04:52:56.245287 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:52:56.245298 | orchestrator | 2026-04-06 04:52:56.245309 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-06 04:52:56.245320 | orchestrator | Monday 06 April 2026 04:52:32 +0000 
(0:00:02.069) 0:00:25.206 ********** 2026-04-06 04:52:56.245331 | orchestrator | 2026-04-06 04:52:56.245342 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-06 04:52:56.245353 | orchestrator | Monday 06 April 2026 04:52:32 +0000 (0:00:00.465) 0:00:25.671 ********** 2026-04-06 04:52:56.245363 | orchestrator | 2026-04-06 04:52:56.245374 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-06 04:52:56.245385 | orchestrator | Monday 06 April 2026 04:52:33 +0000 (0:00:00.539) 0:00:26.210 ********** 2026-04-06 04:52:56.245395 | orchestrator | 2026-04-06 04:52:56.245406 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-04-06 04:52:56.245417 | orchestrator | Monday 06 April 2026 04:52:33 +0000 (0:00:00.845) 0:00:27.056 ********** 2026-04-06 04:52:56.245428 | orchestrator | changed: [testbed-node-1] 2026-04-06 04:52:56.245439 | orchestrator | changed: [testbed-node-2] 2026-04-06 04:52:56.245449 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:52:56.245460 | orchestrator | 2026-04-06 04:52:56.245471 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-04-06 04:52:56.245490 | orchestrator | Monday 06 April 2026 04:52:44 +0000 (0:00:10.695) 0:00:37.752 ********** 2026-04-06 04:52:56.245501 | orchestrator | changed: [testbed-node-1] 2026-04-06 04:52:56.245512 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:52:56.245523 | orchestrator | changed: [testbed-node-2] 2026-04-06 04:52:56.245533 | orchestrator | 2026-04-06 04:52:56.245544 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 04:52:56.245556 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 04:52:56.245568 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 
failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 04:52:56.245580 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 04:52:56.245590 | orchestrator | 2026-04-06 04:52:56.245601 | orchestrator | 2026-04-06 04:52:56.245612 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 04:52:56.245652 | orchestrator | Monday 06 April 2026 04:52:55 +0000 (0:00:11.329) 0:00:49.082 ********** 2026-04-06 04:52:56.245667 | orchestrator | =============================================================================== 2026-04-06 04:52:56.245678 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 11.33s 2026-04-06 04:52:56.245689 | orchestrator | redis : Restart redis container ---------------------------------------- 10.70s 2026-04-06 04:52:56.245699 | orchestrator | redis : Copying over redis config files --------------------------------- 4.12s 2026-04-06 04:52:56.245710 | orchestrator | redis : Copying over default config.json files -------------------------- 3.13s 2026-04-06 04:52:56.245721 | orchestrator | redis : include_tasks --------------------------------------------------- 3.10s 2026-04-06 04:52:56.245731 | orchestrator | service-check-containers : redis | Check containers --------------------- 3.06s 2026-04-06 04:52:56.245742 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.73s 2026-04-06 04:52:56.245753 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.07s 2026-04-06 04:52:56.245764 | orchestrator | redis : Flush handlers -------------------------------------------------- 1.85s 2026-04-06 04:52:56.245774 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.79s 2026-04-06 04:52:56.245785 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.76s 
2026-04-06 04:52:56.245796 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 1.41s 2026-04-06 04:52:56.435044 | orchestrator | + osism apply -a upgrade mariadb 2026-04-06 04:52:57.693553 | orchestrator | 2026-04-06 04:52:57 | INFO  | Prepare task for execution of mariadb. 2026-04-06 04:52:57.766212 | orchestrator | 2026-04-06 04:52:57 | INFO  | Task da982fd0-1103-4f24-a88e-38cc37eaafc4 (mariadb) was prepared for execution. 2026-04-06 04:52:57.766279 | orchestrator | 2026-04-06 04:52:57 | INFO  | It takes a moment until task da982fd0-1103-4f24-a88e-38cc37eaafc4 (mariadb) has been started and output is visible here. 2026-04-06 04:53:25.934662 | orchestrator | 2026-04-06 04:53:25.934784 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 04:53:25.934801 | orchestrator | 2026-04-06 04:53:25.934813 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 04:53:25.934882 | orchestrator | Monday 06 April 2026 04:53:03 +0000 (0:00:02.589) 0:00:02.590 ********** 2026-04-06 04:53:25.934894 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:53:25.934906 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:53:25.934917 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:53:25.934928 | orchestrator | 2026-04-06 04:53:25.934939 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 04:53:25.934950 | orchestrator | Monday 06 April 2026 04:53:05 +0000 (0:00:01.738) 0:00:04.329 ********** 2026-04-06 04:53:25.934984 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-06 04:53:25.934996 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-06 04:53:25.935017 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-06 04:53:25.935028 | orchestrator | 2026-04-06 04:53:25.935039 | orchestrator | PLAY [Apply role 
mariadb] ****************************************************** 2026-04-06 04:53:25.935049 | orchestrator | 2026-04-06 04:53:25.935060 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-06 04:53:25.935071 | orchestrator | Monday 06 April 2026 04:53:07 +0000 (0:00:01.810) 0:00:06.139 ********** 2026-04-06 04:53:25.935082 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-06 04:53:25.935093 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-06 04:53:25.935103 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-06 04:53:25.935114 | orchestrator | 2026-04-06 04:53:25.935125 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-06 04:53:25.935136 | orchestrator | Monday 06 April 2026 04:53:08 +0000 (0:00:01.487) 0:00:07.626 ********** 2026-04-06 04:53:25.935148 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:53:25.935159 | orchestrator | 2026-04-06 04:53:25.935170 | orchestrator | TASK [mariadb : Remove mariadb-clustercheck] *********************************** 2026-04-06 04:53:25.935183 | orchestrator | Monday 06 April 2026 04:53:11 +0000 (0:00:02.975) 0:00:10.602 ********** 2026-04-06 04:53:25.935196 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:53:25.935208 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:53:25.935221 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:53:25.935233 | orchestrator | 2026-04-06 04:53:25.935245 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-06 04:53:25.935258 | orchestrator | Monday 06 April 2026 04:53:14 +0000 (0:00:03.059) 0:00:13.662 ********** 2026-04-06 04:53:25.935278 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-06 04:53:25.935324 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-06 04:53:25.935350 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-06 04:53:25.935366 | orchestrator | 2026-04-06 04:53:25.935385 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-06 04:53:25.935404 | orchestrator | Monday 06 April 2026 04:53:18 +0000 (0:00:03.792) 0:00:17.454 ********** 2026-04-06 04:53:25.935419 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:53:25.935432 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:53:25.935443 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:53:25.935454 | 
orchestrator | 2026-04-06 04:53:25.935465 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-06 04:53:25.935483 | orchestrator | Monday 06 April 2026 04:53:20 +0000 (0:00:01.707) 0:00:19.162 ********** 2026-04-06 04:53:25.935494 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:53:25.935504 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:53:25.935515 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:53:25.935526 | orchestrator | 2026-04-06 04:53:25.935537 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-06 04:53:25.935547 | orchestrator | Monday 06 April 2026 04:53:22 +0000 (0:00:02.312) 0:00:21.474 ********** 2026-04-06 04:53:25.935574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-06 04:53:38.446547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': 
False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-06 04:53:38.446709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-06 04:53:38.446728 | orchestrator | 2026-04-06 04:53:38.446741 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-06 04:53:38.446754 | orchestrator | Monday 06 April 2026 04:53:27 +0000 (0:00:04.402) 0:00:25.876 ********** 2026-04-06 04:53:38.446766 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:53:38.446778 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:53:38.446790 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:53:38.446802 | orchestrator | 2026-04-06 04:53:38.446814 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-06 04:53:38.446839 | orchestrator | Monday 06 April 2026 04:53:29 +0000 (0:00:02.016) 0:00:27.893 ********** 2026-04-06 04:53:38.446851 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:53:38.446862 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:53:38.446873 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:53:38.446884 | orchestrator | 2026-04-06 04:53:38.446896 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-06 04:53:38.446968 | orchestrator | Monday 06 April 2026 04:53:34 +0000 (0:00:05.138) 0:00:33.032 ********** 2026-04-06 04:53:38.446980 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:53:38.446992 | orchestrator | 2026-04-06 04:53:38.447003 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-06 04:53:38.447014 | 
orchestrator | Monday 06 April 2026 04:53:36 +0000 (0:00:01.953) 0:00:34.985 ********** 2026-04-06 04:53:38.447026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 04:53:38.447046 | 
orchestrator | skipping: [testbed-node-0] 2026-04-06 04:53:38.447074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 04:53:45.313643 | orchestrator | skipping: [testbed-node-1] 2026-04-06 
04:53:45.313753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 04:53:45.313794 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:53:45.313806 | orchestrator | 2026-04-06 
04:53:45.313818 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-06 04:53:45.313828 | orchestrator | Monday 06 April 2026 04:53:39 +0000 (0:00:03.574) 0:00:38.560 ********** 2026-04-06 04:53:45.313854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 04:53:45.313866 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:53:45.313898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 04:53:45.313934 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:53:45.314084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}})  2026-04-06 04:53:45.314108 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:53:45.314125 | orchestrator | 2026-04-06 04:53:45.314142 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-06 04:53:45.314159 | orchestrator | Monday 06 April 2026 04:53:43 +0000 (0:00:03.412) 0:00:41.972 ********** 2026-04-06 04:53:45.314191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 04:53:50.245760 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:53:50.245917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 
2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 04:53:50.246011 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:53:50.246103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 04:53:50.246139 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:53:50.246152 | orchestrator | 2026-04-06 04:53:50.246164 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-04-06 04:53:50.246176 | orchestrator | Monday 06 April 2026 04:53:47 +0000 (0:00:03.950) 0:00:45.923 ********** 2026-04-06 04:53:50.246219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-06 04:53:50.246233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': 
[' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-06 04:53:50.246264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-06 04:54:05.767747 | orchestrator | 2026-04-06 04:54:05.767915 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-04-06 04:54:05.767946 | orchestrator | Monday 06 April 2026 04:53:51 +0000 (0:00:04.251) 0:00:50.175 ********** 2026-04-06 04:54:05.768025 | orchestrator | changed: [testbed-node-0] => { 2026-04-06 04:54:05.768045 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 04:54:05.768063 | orchestrator | } 2026-04-06 04:54:05.768081 | orchestrator | changed: [testbed-node-1] => { 2026-04-06 04:54:05.768097 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 04:54:05.768112 | orchestrator | } 2026-04-06 04:54:05.768129 | orchestrator | changed: [testbed-node-2] => { 2026-04-06 04:54:05.768148 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 04:54:05.768166 | orchestrator | } 2026-04-06 04:54:05.768185 | orchestrator | 2026-04-06 04:54:05.768204 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-06 04:54:05.768220 | orchestrator | Monday 06 April 2026 04:53:52 +0000 (0:00:01.449) 0:00:51.625 ********** 2026-04-06 04:54:05.768237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 04:54:05.768277 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:54:05.768325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 04:54:05.768342 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:54:05.768356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 04:54:05.768379 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:54:05.768392 | orchestrator | 2026-04-06 04:54:05.768405 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-04-06 04:54:05.768419 | orchestrator | Monday 06 April 2026 04:53:56 +0000 (0:00:04.019) 0:00:55.644 ********** 2026-04-06 04:54:05.768432 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:54:05.768445 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:54:05.768458 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:54:05.768471 | orchestrator | 2026-04-06 04:54:05.768484 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-04-06 04:54:05.768498 | orchestrator | Monday 06 April 2026 04:53:58 +0000 (0:00:01.572) 0:00:57.216 ********** 2026-04-06 04:54:05.768511 
| orchestrator | skipping: [testbed-node-0]
2026-04-06 04:54:05.768524 | orchestrator |
2026-04-06 04:54:05.768537 | orchestrator | TASK [mariadb : Stop MariaDB containers] ***************************************
2026-04-06 04:54:05.768551 | orchestrator | Monday 06 April 2026 04:53:59 +0000 (0:00:01.107) 0:00:58.323 **********
2026-04-06 04:54:05.768565 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:54:05.768577 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:54:05.768590 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:54:05.768603 | orchestrator |
2026-04-06 04:54:05.768616 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************
2026-04-06 04:54:05.768629 | orchestrator | Monday 06 April 2026 04:54:00 +0000 (0:00:01.456) 0:00:59.780 **********
2026-04-06 04:54:05.768642 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:54:05.768655 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:54:05.768666 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:54:05.768677 | orchestrator |
2026-04-06 04:54:05.768687 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ******************************
2026-04-06 04:54:05.768699 | orchestrator | Monday 06 April 2026 04:54:02 +0000 (0:00:01.436) 0:01:01.216 **********
2026-04-06 04:54:05.768709 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:54:05.768720 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:54:05.768731 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:54:05.768742 | orchestrator |
2026-04-06 04:54:05.768753 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ******************************
2026-04-06 04:54:05.768764 | orchestrator | Monday 06 April 2026 04:54:04 +0000 (0:00:01.590) 0:01:02.807 **********
2026-04-06 04:54:05.768775 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:54:05.768786 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:54:05.768797 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:54:05.768814 | orchestrator |
2026-04-06 04:54:05.768825 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] ***************************
2026-04-06 04:54:05.768835 | orchestrator | Monday 06 April 2026 04:54:05 +0000 (0:00:01.405) 0:01:04.213 **********
2026-04-06 04:54:05.768846 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:54:05.768857 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:54:05.768868 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:54:05.768879 | orchestrator |
2026-04-06 04:54:05.768898 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] ****************************
2026-04-06 04:54:23.492542 | orchestrator | Monday 06 April 2026 04:54:06 +0000 (0:00:01.450) 0:01:05.663 **********
2026-04-06 04:54:23.492689 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:54:23.492718 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:54:23.492738 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:54:23.492757 | orchestrator |
2026-04-06 04:54:23.492776 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ********************
2026-04-06 04:54:23.492796 | orchestrator | Monday 06 April 2026 04:54:08 +0000 (0:00:01.354) 0:01:07.018 **********
2026-04-06 04:54:23.492815 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 04:54:23.492834 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-06 04:54:23.492853 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-06 04:54:23.492873 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:54:23.492886 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-06 04:54:23.492896 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-06 04:54:23.492908 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-06 04:54:23.492918 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:54:23.492930 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-06 04:54:23.492941 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-06 04:54:23.492952 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-06 04:54:23.492994 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:54:23.493006 | orchestrator |
2026-04-06 04:54:23.493017 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] ***
2026-04-06 04:54:23.493030 | orchestrator | Monday 06 April 2026 04:54:09 +0000 (0:00:01.652) 0:01:08.670 **********
2026-04-06 04:54:23.493043 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:54:23.493057 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:54:23.493070 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:54:23.493083 | orchestrator |
2026-04-06 04:54:23.493095 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] *****
2026-04-06 04:54:23.493108 | orchestrator | Monday 06 April 2026 04:54:11 +0000 (0:00:01.408) 0:01:10.079 **********
2026-04-06 04:54:23.493121 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:54:23.493134 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:54:23.493146 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:54:23.493159 | orchestrator |
2026-04-06 04:54:23.493171 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] ***************
2026-04-06 04:54:23.493185 | orchestrator | Monday 06 April 2026 04:54:12 +0000 (0:00:01.372) 0:01:11.451 **********
2026-04-06 04:54:23.493197 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:54:23.493209 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:54:23.493222 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:54:23.493234 | orchestrator |
2026-04-06 04:54:23.493246 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] ***
2026-04-06 04:54:23.493259 | orchestrator | Monday 06 April 2026 04:54:14 +0000 (0:00:01.373) 0:01:12.825 **********
2026-04-06 04:54:23.493272 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:54:23.493285 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:54:23.493297 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:54:23.493309 | orchestrator |
2026-04-06 04:54:23.493322 | orchestrator | TASK [mariadb : Starting first MariaDB container] ******************************
2026-04-06 04:54:23.493357 | orchestrator | Monday 06 April 2026 04:54:15 +0000 (0:00:01.445) 0:01:14.270 **********
2026-04-06 04:54:23.493371 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:54:23.493384 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:54:23.493395 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:54:23.493406 | orchestrator |
2026-04-06 04:54:23.493417 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ******************************
2026-04-06 04:54:23.493428 | orchestrator | Monday 06 April 2026 04:54:16 +0000 (0:00:01.348) 0:01:15.619 **********
2026-04-06 04:54:23.493439 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:54:23.493450 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:54:23.493461 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:54:23.493472 | orchestrator |
2026-04-06 04:54:23.493483 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************
2026-04-06 04:54:23.493494 | orchestrator | Monday 06 April 2026 04:54:18 +0000 (0:00:01.412) 0:01:17.032 **********
2026-04-06 04:54:23.493505 | orchestrator | skipping: [testbed-node-0]
2026-04-06 04:54:23.493516 | orchestrator | skipping: [testbed-node-1]
2026-04-06 04:54:23.493527 | orchestrator | skipping: [testbed-node-2]
2026-04-06 04:54:23.493537 | orchestrator |
2026-04-06 04:54:23.493548 |
orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-04-06 04:54:23.493559 | orchestrator | Monday 06 April 2026 04:54:19 +0000 (0:00:01.520) 0:01:18.552 ********** 2026-04-06 04:54:23.493570 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:54:23.493580 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:54:23.493591 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:54:23.493602 | orchestrator | 2026-04-06 04:54:23.493613 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-04-06 04:54:23.493624 | orchestrator | Monday 06 April 2026 04:54:21 +0000 (0:00:01.388) 0:01:19.940 ********** 2026-04-06 04:54:23.493677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 04:54:23.493694 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:54:23.493708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 04:54:23.493727 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:54:23.493753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 
'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 04:54:41.559821 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:54:41.559939 | orchestrator | 2026-04-06 04:54:41.559955 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-04-06 04:54:41.560037 | orchestrator | Monday 06 April 2026 04:54:24 +0000 (0:00:03.435) 0:01:23.376 ********** 2026-04-06 04:54:41.560052 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:54:41.560064 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:54:41.560075 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:54:41.560112 | orchestrator | 2026-04-06 04:54:41.560124 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-04-06 04:54:41.560135 | orchestrator | Monday 06 April 2026 04:54:25 +0000 (0:00:01.353) 0:01:24.729 ********** 2026-04-06 04:54:41.560150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 04:54:41.560166 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:54:41.560214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 04:54:41.560237 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:54:41.560250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-06 04:54:41.560261 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:54:41.560272 | orchestrator | 2026-04-06 04:54:41.560283 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-04-06 04:54:41.560295 | orchestrator | Monday 06 April 2026 04:54:29 +0000 (0:00:03.465) 0:01:28.195 ********** 2026-04-06 04:54:41.560305 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:54:41.560316 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:54:41.560327 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:54:41.560337 | orchestrator | 2026-04-06 04:54:41.560349 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-04-06 04:54:41.560362 | orchestrator | Monday 06 April 2026 04:54:31 +0000 (0:00:01.685) 
0:01:29.880 ********** 2026-04-06 04:54:41.560375 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:54:41.560387 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:54:41.560400 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:54:41.560413 | orchestrator | 2026-04-06 04:54:41.560425 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-04-06 04:54:41.560439 | orchestrator | Monday 06 April 2026 04:54:32 +0000 (0:00:01.463) 0:01:31.344 ********** 2026-04-06 04:54:41.560453 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:54:41.560466 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:54:41.560478 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:54:41.560491 | orchestrator | 2026-04-06 04:54:41.560503 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-04-06 04:54:41.560516 | orchestrator | Monday 06 April 2026 04:54:34 +0000 (0:00:01.512) 0:01:32.856 ********** 2026-04-06 04:54:41.560528 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:54:41.560541 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:54:41.560553 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:54:41.560566 | orchestrator | 2026-04-06 04:54:41.560579 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-06 04:54:41.560591 | orchestrator | Monday 06 April 2026 04:54:35 +0000 (0:00:01.829) 0:01:34.686 ********** 2026-04-06 04:54:41.560604 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:54:41.560622 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:54:41.560635 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:54:41.560648 | orchestrator | 2026-04-06 04:54:41.560676 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-04-06 04:54:41.560699 | orchestrator | Monday 06 April 2026 04:54:37 +0000 (0:00:02.053) 
0:01:36.740 ********** 2026-04-06 04:54:41.560711 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:54:41.560723 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:54:41.560734 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:54:41.560744 | orchestrator | 2026-04-06 04:54:41.560755 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-06 04:54:41.560766 | orchestrator | Monday 06 April 2026 04:54:40 +0000 (0:00:02.110) 0:01:38.850 ********** 2026-04-06 04:54:41.560777 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:54:41.560787 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:54:41.560798 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:54:41.560809 | orchestrator | 2026-04-06 04:54:41.560819 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-04-06 04:54:41.560830 | orchestrator | Monday 06 April 2026 04:54:41 +0000 (0:00:01.378) 0:01:40.229 ********** 2026-04-06 04:54:41.560848 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:57:19.059765 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:57:19.059876 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:57:19.059883 | orchestrator | 2026-04-06 04:57:19.059888 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-04-06 04:57:19.059894 | orchestrator | Monday 06 April 2026 04:54:42 +0000 (0:00:01.464) 0:01:41.693 ********** 2026-04-06 04:57:19.059898 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:57:19.059902 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:57:19.059906 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:57:19.059910 | orchestrator | 2026-04-06 04:57:19.059914 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-04-06 04:57:19.059918 | orchestrator | Monday 06 April 2026 04:54:44 +0000 (0:00:01.796) 0:01:43.490 ********** 2026-04-06 04:57:19.059922 | orchestrator | ok: 
[testbed-node-0] 2026-04-06 04:57:19.059926 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:57:19.059930 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:57:19.059934 | orchestrator | 2026-04-06 04:57:19.059938 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-04-06 04:57:19.059941 | orchestrator | Monday 06 April 2026 04:54:46 +0000 (0:00:01.625) 0:01:45.115 ********** 2026-04-06 04:57:19.059945 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:57:19.059950 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:57:19.059954 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:57:19.059958 | orchestrator | 2026-04-06 04:57:19.059962 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-04-06 04:57:19.059966 | orchestrator | Monday 06 April 2026 04:54:47 +0000 (0:00:01.393) 0:01:46.508 ********** 2026-04-06 04:57:19.059970 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:57:19.059973 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:57:19.059977 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:57:19.059981 | orchestrator | 2026-04-06 04:57:19.059985 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-04-06 04:57:19.059989 | orchestrator | Monday 06 April 2026 04:54:51 +0000 (0:00:03.373) 0:01:49.882 ********** 2026-04-06 04:57:19.059993 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:57:19.059996 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:57:19.060000 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:57:19.060004 | orchestrator | 2026-04-06 04:57:19.060008 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-04-06 04:57:19.060012 | orchestrator | Monday 06 April 2026 04:54:52 +0000 (0:00:01.411) 0:01:51.293 ********** 2026-04-06 04:57:19.060015 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:57:19.060019 | 
orchestrator | ok: [testbed-node-1] 2026-04-06 04:57:19.060023 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:57:19.060027 | orchestrator | 2026-04-06 04:57:19.060050 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-04-06 04:57:19.060085 | orchestrator | Monday 06 April 2026 04:54:54 +0000 (0:00:01.691) 0:01:52.984 ********** 2026-04-06 04:57:19.060090 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:57:19.060094 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:57:19.060097 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:57:19.060101 | orchestrator | 2026-04-06 04:57:19.060105 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-06 04:57:19.060109 | orchestrator | Monday 06 April 2026 04:54:55 +0000 (0:00:01.775) 0:01:54.760 ********** 2026-04-06 04:57:19.060113 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:57:19.060117 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:57:19.060120 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:57:19.060124 | orchestrator | 2026-04-06 04:57:19.060128 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-06 04:57:19.060132 | orchestrator | Monday 06 April 2026 04:54:57 +0000 (0:00:01.347) 0:01:56.107 ********** 2026-04-06 04:57:19.060136 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:57:19.060140 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:57:19.060143 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:57:19.060147 | orchestrator | 2026-04-06 04:57:19.060151 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-04-06 04:57:19.060155 | orchestrator | Monday 06 April 2026 04:54:59 +0000 (0:00:01.732) 0:01:57.839 ********** 2026-04-06 04:57:19.060159 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:57:19.060162 | 
orchestrator | changed: [testbed-node-1] 2026-04-06 04:57:19.060166 | orchestrator | changed: [testbed-node-2] 2026-04-06 04:57:19.060170 | orchestrator | 2026-04-06 04:57:19.060174 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-04-06 04:57:19.060177 | orchestrator | Monday 06 April 2026 04:55:00 +0000 (0:00:01.460) 0:01:59.300 ********** 2026-04-06 04:57:19.060181 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:57:19.060185 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:57:19.060189 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:57:19.060192 | orchestrator | 2026-04-06 04:57:19.060196 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-06 04:57:19.060200 | orchestrator | 2026-04-06 04:57:19.060204 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-06 04:57:19.060210 | orchestrator | Monday 06 April 2026 04:55:02 +0000 (0:00:01.965) 0:02:01.265 ********** 2026-04-06 04:57:19.060214 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:57:19.060217 | orchestrator | 2026-04-06 04:57:19.060221 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-06 04:57:19.060225 | orchestrator | Monday 06 April 2026 04:55:29 +0000 (0:00:26.818) 0:02:28.084 ********** 2026-04-06 04:57:19.060229 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:57:19.060233 | orchestrator | 2026-04-06 04:57:19.060237 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-06 04:57:19.060240 | orchestrator | Monday 06 April 2026 04:55:34 +0000 (0:00:04.812) 0:02:32.897 ********** 2026-04-06 04:57:19.060244 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:57:19.060248 | orchestrator | 2026-04-06 04:57:19.060252 | orchestrator | PLAY [Restart mariadb services] 
************************************************ 2026-04-06 04:57:19.060256 | orchestrator | 2026-04-06 04:57:19.060259 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-06 04:57:19.060263 | orchestrator | Monday 06 April 2026 04:55:37 +0000 (0:00:02.994) 0:02:35.892 ********** 2026-04-06 04:57:19.060267 | orchestrator | changed: [testbed-node-1] 2026-04-06 04:57:19.060271 | orchestrator | 2026-04-06 04:57:19.060275 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-06 04:57:19.060289 | orchestrator | Monday 06 April 2026 04:56:03 +0000 (0:00:26.536) 0:03:02.428 ********** 2026-04-06 04:57:19.060293 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service port liveness (10 retries left). 2026-04-06 04:57:19.060302 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:57:19.060306 | orchestrator | 2026-04-06 04:57:19.060310 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-06 04:57:19.060314 | orchestrator | Monday 06 April 2026 04:56:11 +0000 (0:00:08.008) 0:03:10.437 ********** 2026-04-06 04:57:19.060318 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:57:19.060322 | orchestrator | 2026-04-06 04:57:19.060326 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-06 04:57:19.060329 | orchestrator | 2026-04-06 04:57:19.060333 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-06 04:57:19.060338 | orchestrator | Monday 06 April 2026 04:56:14 +0000 (0:00:02.936) 0:03:13.373 ********** 2026-04-06 04:57:19.060342 | orchestrator | changed: [testbed-node-2] 2026-04-06 04:57:19.060347 | orchestrator | 2026-04-06 04:57:19.060351 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-06 04:57:19.060356 | orchestrator | Monday 06 April 
2026 04:56:40 +0000 (0:00:25.531) 0:03:38.905 ********** 2026-04-06 04:57:19.060360 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:57:19.060404 | orchestrator | 2026-04-06 04:57:19.060409 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-06 04:57:19.060414 | orchestrator | Monday 06 April 2026 04:56:44 +0000 (0:00:04.640) 0:03:43.545 ********** 2026-04-06 04:57:19.060418 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:57:19.060423 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-04-06 04:57:19.060428 | orchestrator | 2026-04-06 04:57:19.060432 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-06 04:57:19.060437 | orchestrator | skipping: no hosts matched 2026-04-06 04:57:19.060442 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-06 04:57:19.060446 | orchestrator | mariadb_bootstrap_restart 2026-04-06 04:57:19.060451 | orchestrator | 2026-04-06 04:57:19.060455 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-06 04:57:19.060459 | orchestrator | skipping: no hosts matched 2026-04-06 04:57:19.060462 | orchestrator | 2026-04-06 04:57:19.060466 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-06 04:57:19.060470 | orchestrator | 2026-04-06 04:57:19.060474 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-06 04:57:19.060478 | orchestrator | Monday 06 April 2026 04:56:48 +0000 (0:00:04.046) 0:03:47.592 ********** 2026-04-06 04:57:19.060482 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:57:19.060485 | orchestrator | 2026-04-06 04:57:19.060489 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-04-06 
04:57:19.060493 | orchestrator | Monday 06 April 2026 04:56:50 +0000 (0:00:01.711) 0:03:49.303 ********** 2026-04-06 04:57:19.060497 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:57:19.060501 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:57:19.060504 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:57:19.060508 | orchestrator | 2026-04-06 04:57:19.060512 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-04-06 04:57:19.060516 | orchestrator | Monday 06 April 2026 04:56:53 +0000 (0:00:03.263) 0:03:52.567 ********** 2026-04-06 04:57:19.060520 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:57:19.060524 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:57:19.060527 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:57:19.060531 | orchestrator | 2026-04-06 04:57:19.060535 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-04-06 04:57:19.060539 | orchestrator | Monday 06 April 2026 04:56:57 +0000 (0:00:03.365) 0:03:55.933 ********** 2026-04-06 04:57:19.060543 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:57:19.060547 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:57:19.060550 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:57:19.060554 | orchestrator | 2026-04-06 04:57:19.060558 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-04-06 04:57:19.060566 | orchestrator | Monday 06 April 2026 04:57:00 +0000 (0:00:03.370) 0:03:59.304 ********** 2026-04-06 04:57:19.060570 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:57:19.060573 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:57:19.060577 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:57:19.060581 | orchestrator | 2026-04-06 04:57:19.060585 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-04-06 04:57:19.060589 | 
orchestrator | Monday 06 April 2026 04:57:03 +0000 (0:00:03.276) 0:04:02.580 ********** 2026-04-06 04:57:19.060592 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:57:19.060596 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:57:19.060600 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:57:19.060604 | orchestrator | 2026-04-06 04:57:19.060610 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-04-06 04:57:19.060614 | orchestrator | Monday 06 April 2026 04:57:10 +0000 (0:00:06.722) 0:04:09.303 ********** 2026-04-06 04:57:19.060618 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:57:19.060622 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:57:19.060626 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:57:19.060630 | orchestrator | 2026-04-06 04:57:19.060633 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-04-06 04:57:19.060637 | orchestrator | Monday 06 April 2026 04:57:13 +0000 (0:00:03.442) 0:04:12.746 ********** 2026-04-06 04:57:19.060641 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:57:19.060645 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:57:19.060649 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:57:19.060653 | orchestrator | 2026-04-06 04:57:19.060656 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-06 04:57:19.060660 | orchestrator | Monday 06 April 2026 04:57:15 +0000 (0:00:01.557) 0:04:14.303 ********** 2026-04-06 04:57:19.060664 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:57:19.060668 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:57:19.060672 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:57:19.060676 | orchestrator | 2026-04-06 04:57:19.060679 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-06 04:57:19.060686 | orchestrator | Monday 06 April 2026 
04:57:19 +0000 (0:00:03.537) 0:04:17.841 ********** 2026-04-06 04:57:39.430797 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:57:39.430900 | orchestrator | 2026-04-06 04:57:39.430913 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ****************************** 2026-04-06 04:57:39.430922 | orchestrator | Monday 06 April 2026 04:57:20 +0000 (0:00:01.919) 0:04:19.760 ********** 2026-04-06 04:57:39.430931 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:57:39.430940 | orchestrator | changed: [testbed-node-2] 2026-04-06 04:57:39.430948 | orchestrator | changed: [testbed-node-1] 2026-04-06 04:57:39.430956 | orchestrator | 2026-04-06 04:57:39.430965 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 04:57:39.430974 | orchestrator | testbed-node-0 : ok=35  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-06 04:57:39.430984 | orchestrator | testbed-node-1 : ok=27  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-04-06 04:57:39.430992 | orchestrator | testbed-node-2 : ok=27  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-04-06 04:57:39.431000 | orchestrator | 2026-04-06 04:57:39.431008 | orchestrator | 2026-04-06 04:57:39.431016 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 04:57:39.431024 | orchestrator | Monday 06 April 2026 04:57:39 +0000 (0:00:18.051) 0:04:37.812 ********** 2026-04-06 04:57:39.431031 | orchestrator | =============================================================================== 2026-04-06 04:57:39.431039 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 78.89s 2026-04-06 04:57:39.431071 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 18.05s 2026-04-06 04:57:39.431079 | orchestrator | mariadb : Wait for 
MariaDB service port liveness ----------------------- 17.46s 2026-04-06 04:57:39.431087 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 9.98s 2026-04-06 04:57:39.431095 | orchestrator | service-check : mariadb | Get container facts --------------------------- 6.72s 2026-04-06 04:57:39.431103 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 5.14s 2026-04-06 04:57:39.431111 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.40s 2026-04-06 04:57:39.431119 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 4.25s 2026-04-06 04:57:39.431127 | orchestrator | service-check-containers : Include tasks -------------------------------- 4.02s 2026-04-06 04:57:39.431135 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.95s 2026-04-06 04:57:39.431143 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.79s 2026-04-06 04:57:39.431150 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.57s 2026-04-06 04:57:39.431158 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.54s 2026-04-06 04:57:39.431166 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 3.46s 2026-04-06 04:57:39.431174 | orchestrator | service-check : mariadb | Fail if containers are missing or not running --- 3.44s 2026-04-06 04:57:39.431182 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 3.44s 2026-04-06 04:57:39.431190 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.41s 2026-04-06 04:57:39.431198 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 3.37s 2026-04-06 04:57:39.431206 | orchestrator | mariadb : Creating 
database backup user and setting permissions --------- 3.37s 2026-04-06 04:57:39.431214 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 3.37s 2026-04-06 04:57:39.635803 | orchestrator | + osism apply -a upgrade rabbitmq 2026-04-06 04:57:40.939886 | orchestrator | 2026-04-06 04:57:40 | INFO  | Prepare task for execution of rabbitmq. 2026-04-06 04:57:41.013216 | orchestrator | 2026-04-06 04:57:41 | INFO  | Task e6be0449-4654-492c-a1e1-c7a0ae26dbb9 (rabbitmq) was prepared for execution. 2026-04-06 04:57:41.013295 | orchestrator | 2026-04-06 04:57:41 | INFO  | It takes a moment until task e6be0449-4654-492c-a1e1-c7a0ae26dbb9 (rabbitmq) has been started and output is visible here. 2026-04-06 04:58:24.585453 | orchestrator | 2026-04-06 04:58:24.585557 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 04:58:24.585605 | orchestrator | 2026-04-06 04:58:24.585616 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 04:58:24.585626 | orchestrator | Monday 06 April 2026 04:57:45 +0000 (0:00:01.382) 0:00:01.382 ********** 2026-04-06 04:58:24.585634 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:58:24.585642 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:58:24.585651 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:58:24.585659 | orchestrator | 2026-04-06 04:58:24.585666 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 04:58:24.585674 | orchestrator | Monday 06 April 2026 04:57:47 +0000 (0:00:01.853) 0:00:03.236 ********** 2026-04-06 04:58:24.585683 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-04-06 04:58:24.585691 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-04-06 04:58:24.585699 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-04-06 04:58:24.585708 | orchestrator | 
2026-04-06 04:58:24.585716 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-04-06 04:58:24.585725 | orchestrator | 2026-04-06 04:58:24.585734 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-06 04:58:24.585770 | orchestrator | Monday 06 April 2026 04:57:52 +0000 (0:00:04.674) 0:00:07.911 ********** 2026-04-06 04:58:24.585780 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:58:24.585789 | orchestrator | 2026-04-06 04:58:24.585796 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-06 04:58:24.585804 | orchestrator | Monday 06 April 2026 04:57:54 +0000 (0:00:01.873) 0:00:09.784 ********** 2026-04-06 04:58:24.585811 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:58:24.585819 | orchestrator | 2026-04-06 04:58:24.585826 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-04-06 04:58:24.585833 | orchestrator | Monday 06 April 2026 04:57:56 +0000 (0:00:02.675) 0:00:12.460 ********** 2026-04-06 04:58:24.585841 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:58:24.585848 | orchestrator | 2026-04-06 04:58:24.585856 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-04-06 04:58:24.585864 | orchestrator | Monday 06 April 2026 04:57:59 +0000 (0:00:03.050) 0:00:15.510 ********** 2026-04-06 04:58:24.585872 | orchestrator | changed: [testbed-node-0] 2026-04-06 04:58:24.585881 | orchestrator | 2026-04-06 04:58:24.585889 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-04-06 04:58:24.585897 | orchestrator | Monday 06 April 2026 04:58:09 +0000 (0:00:09.404) 0:00:24.915 ********** 2026-04-06 04:58:24.585905 | orchestrator | ok: [testbed-node-0] => { 2026-04-06 04:58:24.585914 
| orchestrator |  "changed": false, 2026-04-06 04:58:24.585922 | orchestrator |  "msg": "All assertions passed" 2026-04-06 04:58:24.585930 | orchestrator | } 2026-04-06 04:58:24.585938 | orchestrator | 2026-04-06 04:58:24.585946 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-06 04:58:24.585953 | orchestrator | Monday 06 April 2026 04:58:10 +0000 (0:00:01.341) 0:00:26.256 ********** 2026-04-06 04:58:24.585960 | orchestrator | ok: [testbed-node-0] => { 2026-04-06 04:58:24.585968 | orchestrator |  "changed": false, 2026-04-06 04:58:24.585976 | orchestrator |  "msg": "All assertions passed" 2026-04-06 04:58:24.585984 | orchestrator | } 2026-04-06 04:58:24.585993 | orchestrator | 2026-04-06 04:58:24.586002 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-06 04:58:24.586011 | orchestrator | Monday 06 April 2026 04:58:12 +0000 (0:00:01.655) 0:00:27.912 ********** 2026-04-06 04:58:24.586072 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 04:58:24.586080 | orchestrator | 2026-04-06 04:58:24.586089 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-06 04:58:24.586098 | orchestrator | Monday 06 April 2026 04:58:14 +0000 (0:00:01.869) 0:00:29.782 ********** 2026-04-06 04:58:24.586106 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:58:24.586115 | orchestrator | 2026-04-06 04:58:24.586123 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-06 04:58:24.586131 | orchestrator | Monday 06 April 2026 04:58:16 +0000 (0:00:02.338) 0:00:32.120 ********** 2026-04-06 04:58:24.586139 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:58:24.586148 | orchestrator | 2026-04-06 04:58:24.586157 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] 
*************************** 2026-04-06 04:58:24.586166 | orchestrator | Monday 06 April 2026 04:58:19 +0000 (0:00:03.007) 0:00:35.128 ********** 2026-04-06 04:58:24.586175 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:58:24.586183 | orchestrator | 2026-04-06 04:58:24.586190 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-06 04:58:24.586199 | orchestrator | Monday 06 April 2026 04:58:21 +0000 (0:00:01.646) 0:00:36.775 ********** 2026-04-06 04:58:24.586275 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 04:58:24.586301 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 04:58:24.586312 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 04:58:24.586321 | orchestrator | 2026-04-06 04:58:24.586329 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] 
****************** 2026-04-06 04:58:24.586338 | orchestrator | Monday 06 April 2026 04:58:23 +0000 (0:00:02.081) 0:00:38.856 ********** 2026-04-06 04:58:24.586346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 04:58:24.586374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 04:58:46.222115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 04:58:46.222266 | orchestrator | 2026-04-06 04:58:46.222296 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-06 04:58:46.222319 | orchestrator | Monday 06 April 2026 04:58:26 +0000 (0:00:03.277) 0:00:42.133 ********** 2026-04-06 04:58:46.222337 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-06 04:58:46.222355 | orchestrator | ok: [testbed-node-1] => 
(item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-06 04:58:46.222373 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-06 04:58:46.222391 | orchestrator | 2026-04-06 04:58:46.222409 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-04-06 04:58:46.222426 | orchestrator | Monday 06 April 2026 04:58:29 +0000 (0:00:02.493) 0:00:44.627 ********** 2026-04-06 04:58:46.222443 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-06 04:58:46.222461 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-06 04:58:46.222481 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-06 04:58:46.222498 | orchestrator | 2026-04-06 04:58:46.222519 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-06 04:58:46.222598 | orchestrator | Monday 06 April 2026 04:58:31 +0000 (0:00:02.875) 0:00:47.502 ********** 2026-04-06 04:58:46.222620 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-06 04:58:46.222678 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-06 04:58:46.222699 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-06 04:58:46.222718 | orchestrator | 2026-04-06 04:58:46.222738 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-06 04:58:46.222757 | orchestrator | Monday 06 April 2026 04:58:34 +0000 (0:00:02.349) 0:00:49.852 ********** 2026-04-06 04:58:46.222777 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-06 04:58:46.222797 | orchestrator | 
ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-06 04:58:46.222817 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-06 04:58:46.222836 | orchestrator | 2026-04-06 04:58:46.222855 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-04-06 04:58:46.222874 | orchestrator | Monday 06 April 2026 04:58:37 +0000 (0:00:02.835) 0:00:52.687 ********** 2026-04-06 04:58:46.222893 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-06 04:58:46.222912 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-06 04:58:46.222930 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-06 04:58:46.222948 | orchestrator | 2026-04-06 04:58:46.222967 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-06 04:58:46.223004 | orchestrator | Monday 06 April 2026 04:58:39 +0000 (0:00:02.425) 0:00:55.113 ********** 2026-04-06 04:58:46.223022 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-06 04:58:46.223041 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-06 04:58:46.223059 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-06 04:58:46.223076 | orchestrator | 2026-04-06 04:58:46.223094 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-06 04:58:46.223112 | orchestrator | Monday 06 April 2026 04:58:41 +0000 (0:00:02.340) 0:00:57.454 ********** 2026-04-06 04:58:46.223129 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 
2026-04-06 04:58:46.223147 | orchestrator | 2026-04-06 04:58:46.223191 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-04-06 04:58:46.223211 | orchestrator | Monday 06 April 2026 04:58:43 +0000 (0:00:01.829) 0:00:59.283 ********** 2026-04-06 04:58:46.223233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 04:58:46.223257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 04:58:46.223299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 04:58:46.223319 | orchestrator | 2026-04-06 04:58:46.223338 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-04-06 04:58:46.223365 | orchestrator | Monday 06 April 2026 04:58:46 +0000 (0:00:02.430) 0:01:01.714 ********** 2026-04-06 04:58:46.223398 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-06 04:58:54.635556 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:58:54.635660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-06 04:58:54.635738 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:58:54.635751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-06 04:58:54.635759 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:58:54.635767 | orchestrator | 2026-04-06 04:58:54.635775 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-04-06 04:58:54.635784 | orchestrator | Monday 06 April 2026 04:58:47 +0000 (0:00:01.554) 0:01:03.268 ********** 2026-04-06 04:58:54.635805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-06 04:58:54.635813 | orchestrator | skipping: [testbed-node-0] 2026-04-06 04:58:54.635837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 
'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-06 04:58:54.635853 | orchestrator | skipping: [testbed-node-1] 2026-04-06 04:58:54.635860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-06 04:58:54.635868 | orchestrator | skipping: [testbed-node-2] 2026-04-06 04:58:54.635875 | orchestrator | 2026-04-06 04:58:54.635882 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-04-06 04:58:54.635889 | orchestrator | Monday 06 April 2026 04:58:49 +0000 (0:00:01.913) 0:01:05.182 ********** 2026-04-06 04:58:54.635896 | orchestrator | ok: [testbed-node-2] 2026-04-06 04:58:54.635904 | orchestrator | ok: [testbed-node-0] 2026-04-06 04:58:54.635911 | orchestrator | ok: [testbed-node-1] 2026-04-06 04:58:54.635918 | orchestrator | 2026-04-06 04:58:54.635925 | orchestrator | TASK [service-check-containers : rabbitmq | Check 
containers] ****************** 2026-04-06 04:58:54.635932 | orchestrator | Monday 06 April 2026 04:58:53 +0000 (0:00:03.993) 0:01:09.175 ********** 2026-04-06 04:58:54.635943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 04:58:54.635956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 05:00:38.817029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-06 05:00:38.817181 | orchestrator | 2026-04-06 05:00:38.817214 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-04-06 05:00:38.817237 | orchestrator | Monday 06 April 2026 04:58:55 +0000 (0:00:02.305) 0:01:11.480 ********** 2026-04-06 05:00:38.817257 | orchestrator | changed: [testbed-node-0] => { 2026-04-06 05:00:38.817278 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 05:00:38.817298 | orchestrator | } 
2026-04-06 05:00:38.817318 | orchestrator | changed: [testbed-node-1] => { 2026-04-06 05:00:38.817336 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 05:00:38.817426 | orchestrator | } 2026-04-06 05:00:38.817445 | orchestrator | changed: [testbed-node-2] => { 2026-04-06 05:00:38.817462 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 05:00:38.817479 | orchestrator | } 2026-04-06 05:00:38.817499 | orchestrator | 2026-04-06 05:00:38.817517 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-06 05:00:38.817536 | orchestrator | Monday 06 April 2026 04:58:57 +0000 (0:00:01.668) 0:01:13.149 ********** 2026-04-06 05:00:38.817583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-06 05:00:38.817611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-06 05:00:38.817666 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:00:38.817688 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:00:38.817740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-06 05:00:38.817762 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:00:38.817781 | orchestrator | 2026-04-06 05:00:38.817801 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-04-06 05:00:38.817820 | orchestrator | Monday 06 April 2026 04:58:59 +0000 (0:00:02.017) 0:01:15.166 ********** 2026-04-06 05:00:38.817838 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:00:38.817856 | orchestrator | changed: [testbed-node-1] 2026-04-06 05:00:38.817874 | orchestrator | changed: [testbed-node-2] 2026-04-06 05:00:38.817893 | orchestrator | 2026-04-06 05:00:38.817912 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-06 05:00:38.817930 | orchestrator | 2026-04-06 05:00:38.817948 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-06 05:00:38.817966 | orchestrator | Monday 06 April 2026 04:59:01 +0000 (0:00:01.796) 0:01:16.963 ********** 2026-04-06 05:00:38.817984 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:00:38.818003 | orchestrator | 2026-04-06 05:00:38.818178 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-06 05:00:38.818203 | orchestrator | Monday 06 April 2026 04:59:03 +0000 (0:00:02.182) 0:01:19.145 ********** 2026-04-06 05:00:38.818221 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:00:38.818241 | orchestrator | 2026-04-06 05:00:38.818257 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-06 05:00:38.818276 | orchestrator | Monday 06 April 2026 04:59:13 +0000 (0:00:09.535) 0:01:28.681 ********** 2026-04-06 05:00:38.818294 | orchestrator | changed: [testbed-node-0] 2026-04-06 
05:00:38.818312 | orchestrator | 2026-04-06 05:00:38.818329 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-06 05:00:38.818379 | orchestrator | Monday 06 April 2026 04:59:22 +0000 (0:00:09.364) 0:01:38.046 ********** 2026-04-06 05:00:38.818398 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:00:38.818415 | orchestrator | 2026-04-06 05:00:38.818431 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-06 05:00:38.818449 | orchestrator | 2026-04-06 05:00:38.818466 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-06 05:00:38.818485 | orchestrator | Monday 06 April 2026 04:59:31 +0000 (0:00:08.642) 0:01:46.689 ********** 2026-04-06 05:00:38.818502 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:00:38.818520 | orchestrator | 2026-04-06 05:00:38.818538 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-06 05:00:38.818577 | orchestrator | Monday 06 April 2026 04:59:32 +0000 (0:00:01.718) 0:01:48.407 ********** 2026-04-06 05:00:38.818595 | orchestrator | changed: [testbed-node-1] 2026-04-06 05:00:38.818611 | orchestrator | 2026-04-06 05:00:38.818627 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-06 05:00:38.818644 | orchestrator | Monday 06 April 2026 04:59:41 +0000 (0:00:09.009) 0:01:57.416 ********** 2026-04-06 05:00:38.818673 | orchestrator | changed: [testbed-node-1] 2026-04-06 05:00:38.818691 | orchestrator | 2026-04-06 05:00:38.818708 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-06 05:00:38.818727 | orchestrator | Monday 06 April 2026 04:59:55 +0000 (0:00:14.095) 0:02:11.512 ********** 2026-04-06 05:00:38.818745 | orchestrator | changed: [testbed-node-1] 2026-04-06 05:00:38.818762 | orchestrator | 2026-04-06
05:00:38.818779 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-06 05:00:38.818796 | orchestrator |
2026-04-06 05:00:38.818814 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-06 05:00:38.818831 | orchestrator | Monday 06 April 2026 05:00:05 +0000 (0:00:09.666) 0:02:21.179 **********
2026-04-06 05:00:38.818849 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:00:38.818866 | orchestrator |
2026-04-06 05:00:38.818883 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-06 05:00:38.818900 | orchestrator | Monday 06 April 2026 05:00:07 +0000 (0:00:01.693) 0:02:22.873 **********
2026-04-06 05:00:38.818919 | orchestrator | changed: [testbed-node-2]
2026-04-06 05:00:38.818935 | orchestrator |
2026-04-06 05:00:38.818953 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-06 05:00:38.818971 | orchestrator | Monday 06 April 2026 05:00:16 +0000 (0:00:08.922) 0:02:31.795 **********
2026-04-06 05:00:38.818988 | orchestrator | changed: [testbed-node-2]
2026-04-06 05:00:38.819006 | orchestrator |
2026-04-06 05:00:38.819024 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-06 05:00:38.819042 | orchestrator | Monday 06 April 2026 05:00:29 +0000 (0:00:13.753) 0:02:45.549 **********
2026-04-06 05:00:38.819059 | orchestrator | changed: [testbed-node-2]
2026-04-06 05:00:38.819078 | orchestrator |
2026-04-06 05:00:38.819096 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-04-06 05:00:38.819113 | orchestrator |
2026-04-06 05:00:38.819130 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-04-06 05:00:38.819170 | orchestrator | Monday 06 April 2026 05:00:38 +0000 (0:00:08.865) 0:02:54.414 **********
2026-04-06 05:00:45.498947 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 05:00:45.499057 | orchestrator |
2026-04-06 05:00:45.499074 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-04-06 05:00:45.499088 | orchestrator | Monday 06 April 2026 05:00:40 +0000 (0:00:01.540) 0:02:55.955 **********
2026-04-06 05:00:45.499101 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:00:45.499114 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:00:45.499126 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:00:45.499137 | orchestrator |
2026-04-06 05:00:45.499149 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 05:00:45.499162 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-06 05:00:45.499177 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-06 05:00:45.499188 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-06 05:00:45.499200 | orchestrator |
2026-04-06 05:00:45.499211 | orchestrator |
2026-04-06 05:00:45.499222 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 05:00:45.499257 | orchestrator | Monday 06 April 2026 05:00:45 +0000 (0:00:04.746) 0:03:00.701 **********
2026-04-06 05:00:45.499269 | orchestrator | ===============================================================================
2026-04-06 05:00:45.499281 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 37.21s
2026-04-06 05:00:45.499291 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 27.47s
2026-04-06 05:00:45.499303 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 27.17s
2026-04-06 05:00:45.499315 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 9.40s
2026-04-06 05:00:45.499326 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 5.60s
2026-04-06 05:00:45.499385 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.75s
2026-04-06 05:00:45.499398 | orchestrator | Group hosts based on enabled services ----------------------------------- 4.67s
2026-04-06 05:00:45.499409 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.99s
2026-04-06 05:00:45.499421 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.28s
2026-04-06 05:00:45.499432 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 3.05s
2026-04-06 05:00:45.499443 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 3.01s
2026-04-06 05:00:45.499455 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.88s
2026-04-06 05:00:45.499467 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.84s
2026-04-06 05:00:45.499479 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.68s
2026-04-06 05:00:45.499491 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.49s
2026-04-06 05:00:45.499502 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 2.43s
2026-04-06 05:00:45.499514 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.43s
2026-04-06 05:00:45.499526 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.35s
2026-04-06 05:00:45.499538 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.34s
2026-04-06 05:00:45.499549 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.34s
2026-04-06 05:00:45.682261 | orchestrator | + osism apply -a upgrade openvswitch
2026-04-06 05:00:46.930084 | orchestrator | 2026-04-06 05:00:46 | INFO  | Prepare task for execution of openvswitch.
2026-04-06 05:00:47.005553 | orchestrator | 2026-04-06 05:00:47 | INFO  | Task b7517a75-c200-4b8a-9b7f-0298d0f771f2 (openvswitch) was prepared for execution.
2026-04-06 05:00:47.005647 | orchestrator | 2026-04-06 05:00:47 | INFO  | It takes a moment until task b7517a75-c200-4b8a-9b7f-0298d0f771f2 (openvswitch) has been started and output is visible here.
2026-04-06 05:01:12.266716 | orchestrator |
2026-04-06 05:01:12.266835 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 05:01:12.266853 | orchestrator |
2026-04-06 05:01:12.266865 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 05:01:12.266877 | orchestrator | Monday 06 April 2026 05:00:52 +0000 (0:00:01.866) 0:00:01.866 **********
2026-04-06 05:01:12.266888 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:01:12.266901 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:01:12.266912 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:01:12.266922 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:01:12.266933 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:01:12.266944 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:01:12.266955 | orchestrator |
2026-04-06 05:01:12.266967 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 05:01:12.266978 | orchestrator | Monday 06 April 2026 05:00:54 +0000 (0:00:02.531) 0:00:04.397 **********
2026-04-06 05:01:12.266989 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-06 05:01:12.267024 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-06 05:01:12.267036 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-06 05:01:12.267047 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-06 05:01:12.267058 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-06 05:01:12.267069 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-06 05:01:12.267079 | orchestrator |
2026-04-06 05:01:12.267090 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-04-06 05:01:12.267101 | orchestrator |
2026-04-06 05:01:12.267112 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-04-06 05:01:12.267123 | orchestrator | Monday 06 April 2026 05:00:57 +0000 (0:00:02.325) 0:00:06.723 **********
2026-04-06 05:01:12.267134 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 05:01:12.267147 | orchestrator |
2026-04-06 05:01:12.267158 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-06 05:01:12.267169 | orchestrator | Monday 06 April 2026 05:01:01 +0000 (0:00:04.073) 0:00:10.796 **********
2026-04-06 05:01:12.267179 | orchestrator | ok: [testbed-node-0] => (item=openvswitch)
2026-04-06 05:01:12.267191 | orchestrator | ok: [testbed-node-1] => (item=openvswitch)
2026-04-06 05:01:12.267202 | orchestrator | ok: [testbed-node-2] => (item=openvswitch)
2026-04-06 05:01:12.267213 | orchestrator | ok: [testbed-node-3] => (item=openvswitch)
2026-04-06 05:01:12.267223 | orchestrator | ok: [testbed-node-4] => (item=openvswitch)
2026-04-06 05:01:12.267234 | orchestrator | ok: [testbed-node-5] => (item=openvswitch)
2026-04-06 05:01:12.267245 | orchestrator |
2026-04-06 05:01:12.267258 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-06 05:01:12.267271 | orchestrator | Monday 06 April 2026 05:01:03 +0000 (0:00:02.746) 0:00:13.543 **********
2026-04-06 05:01:12.267283 | orchestrator | ok: [testbed-node-1] => (item=openvswitch)
2026-04-06 05:01:12.267296 | orchestrator | ok: [testbed-node-0] => (item=openvswitch)
2026-04-06 05:01:12.267340 | orchestrator | ok: [testbed-node-2] => (item=openvswitch)
2026-04-06 05:01:12.267354 | orchestrator | ok: [testbed-node-3] => (item=openvswitch)
2026-04-06 05:01:12.267367 | orchestrator | ok: [testbed-node-4] => (item=openvswitch)
2026-04-06 05:01:12.267380 | orchestrator | ok: [testbed-node-5] => (item=openvswitch)
2026-04-06 05:01:12.267392 | orchestrator |
2026-04-06 05:01:12.267406 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-06 05:01:12.267418 | orchestrator | Monday 06 April 2026 05:01:06 +0000 (0:00:02.747) 0:00:16.291 **********
2026-04-06 05:01:12.267432 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-04-06 05:01:12.267445 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:01:12.267459 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-04-06 05:01:12.267473 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:01:12.267486 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-04-06 05:01:12.267499 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:01:12.267511 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-04-06 05:01:12.267524 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:01:12.267536 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-04-06 05:01:12.267548 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:01:12.267561 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-04-06 05:01:12.267574 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:01:12.267587 | orchestrator |
2026-04-06 05:01:12.267599 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-04-06 05:01:12.267612 | orchestrator | Monday 06 April 2026 05:01:09 +0000 (0:00:02.391) 0:00:18.683 **********
2026-04-06 05:01:12.267635 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:01:12.267647 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:01:12.267658 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:01:12.267669 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:01:12.267680 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:01:12.267705 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:01:12.267717 | orchestrator |
2026-04-06 05:01:12.267728 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-04-06 05:01:12.267739 | orchestrator | Monday 06 April 2026 05:01:11 +0000 (0:00:02.234) 0:00:20.917 **********
2026-04-06 05:01:12.267771 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 05:01:12.267790 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 05:01:12.267803 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 05:01:12.267814 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 05:01:12.267826 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 05:01:12.267852 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 05:01:12.267871 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 05:01:15.632768 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 05:01:15.632874 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 05:01:15.632891 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 05:01:15.632926 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 05:01:15.632953 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 05:01:15.632966 | orchestrator |
2026-04-06 05:01:15.632979 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-04-06 05:01:15.632992 | orchestrator | Monday 06 April 2026 05:01:13 +0000 (0:00:02.502) 0:00:23.420 **********
2026-04-06 05:01:15.633021 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 05:01:15.633035 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 05:01:15.633047 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 05:01:15.633058 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 05:01:15.633083 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 05:01:15.633095 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 05:01:15.633116 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 05:01:22.214205 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 05:01:22.214360 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 05:01:22.214403 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 05:01:22.214430 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 05:01:22.214443 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 05:01:22.214455 | orchestrator |
2026-04-06 05:01:22.214469 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-04-06 05:01:22.214481 | orchestrator | Monday 06 April 2026 05:01:18 +0000 (0:00:04.618) 0:00:28.038 **********
2026-04-06 05:01:22.214493 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:01:22.214506 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:01:22.214517 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:01:22.214528 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:01:22.214539 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:01:22.214549 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:01:22.214560 | orchestrator |
2026-04-06 05:01:22.214572 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] ***************
2026-04-06 05:01:22.214600 | orchestrator | Monday 06 April 2026 05:01:20 +0000 (0:00:02.328) 0:00:30.367 **********
2026-04-06 05:01:22.214613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 05:01:22.214635 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 05:01:22.214647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 05:01:22.214665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 05:01:22.214679 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 05:01:22.214704 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-06 05:01:26.898760 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 05:01:26.898870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 05:01:26.898892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-06 05:01:26.898900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-06 05:01:26.898907 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-06 05:01:26.898928 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-06 05:01:26.898941 | orchestrator | 2026-04-06 05:01:26.898950 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-04-06 05:01:26.898958 | orchestrator | Monday 06 April 2026 05:01:24 +0000 (0:00:03.556) 
0:00:33.923 **********
2026-04-06 05:01:26.898966 | orchestrator | changed: [testbed-node-0] => {
2026-04-06 05:01:26.898975 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 05:01:26.898982 | orchestrator | }
2026-04-06 05:01:26.898989 | orchestrator | changed: [testbed-node-1] => {
2026-04-06 05:01:26.898996 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 05:01:26.899003 | orchestrator | }
2026-04-06 05:01:26.899010 | orchestrator | changed: [testbed-node-2] => {
2026-04-06 05:01:26.899017 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 05:01:26.899024 | orchestrator | }
2026-04-06 05:01:26.899030 | orchestrator | changed: [testbed-node-3] => {
2026-04-06 05:01:26.899037 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 05:01:26.899044 | orchestrator | }
2026-04-06 05:01:26.899051 | orchestrator | changed: [testbed-node-4] => {
2026-04-06 05:01:26.899058 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 05:01:26.899065 | orchestrator | }
2026-04-06 05:01:26.899072 | orchestrator | changed: [testbed-node-5] => {
2026-04-06 05:01:26.899079 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 05:01:26.899086 | orchestrator | }
2026-04-06 05:01:26.899093 | orchestrator |
2026-04-06 05:01:26.899100 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-06 05:01:26.899107 | orchestrator | Monday 06 April 2026 05:01:26 +0000 (0:00:02.085) 0:00:36.008 **********
2026-04-06 05:01:26.899114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-06 05:01:26.899126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-06 05:01:26.899134 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:01:26.899141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-06 05:01:26.899153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': 
{'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-06 05:01:26.899165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-06 05:02:01.110350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-06 05:02:01.110466 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:02:01.110484 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:02:01.110511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-06 05:02:01.110526 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-06 05:02:01.110538 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:02:01.110550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-06 05:02:01.110584 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-06 05:02:01.110596 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:02:01.110626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-06 05:02:01.110638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-06 05:02:01.110650 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:02:01.110661 | orchestrator | 2026-04-06 05:02:01.110673 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-06 05:02:01.110692 | orchestrator | Monday 06 April 2026 05:01:29 +0000 (0:00:02.849) 0:00:38.858 ********** 2026-04-06 05:02:01.110704 | orchestrator | 2026-04-06 05:02:01.110715 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-06 05:02:01.110726 | orchestrator | Monday 06 April 2026 05:01:29 +0000 (0:00:00.738) 0:00:39.597 ********** 2026-04-06 05:02:01.110736 | orchestrator | 2026-04-06 05:02:01.110747 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-06 05:02:01.110758 | orchestrator | Monday 06 April 2026 05:01:30 +0000 (0:00:00.536) 0:00:40.134 ********** 2026-04-06 05:02:01.110769 | orchestrator | 
2026-04-06 05:02:01.110780 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-06 05:02:01.110791 | orchestrator | Monday 06 April 2026 05:01:30 +0000 (0:00:00.497) 0:00:40.632 **********
2026-04-06 05:02:01.110810 | orchestrator |
2026-04-06 05:02:01.110822 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-06 05:02:01.110833 | orchestrator | Monday 06 April 2026 05:01:31 +0000 (0:00:00.511) 0:00:41.143 **********
2026-04-06 05:02:01.110847 | orchestrator |
2026-04-06 05:02:01.110861 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-06 05:02:01.110875 | orchestrator | Monday 06 April 2026 05:01:32 +0000 (0:00:00.532) 0:00:41.675 **********
2026-04-06 05:02:01.110888 | orchestrator |
2026-04-06 05:02:01.110902 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-04-06 05:02:01.110915 | orchestrator | Monday 06 April 2026 05:01:32 +0000 (0:00:00.922) 0:00:42.598 **********
2026-04-06 05:02:01.110929 | orchestrator | changed: [testbed-node-3]
2026-04-06 05:02:01.110942 | orchestrator | changed: [testbed-node-5]
2026-04-06 05:02:01.110956 | orchestrator | changed: [testbed-node-4]
2026-04-06 05:02:01.110969 | orchestrator | changed: [testbed-node-1]
2026-04-06 05:02:01.110983 | orchestrator | changed: [testbed-node-0]
2026-04-06 05:02:01.110996 | orchestrator | changed: [testbed-node-2]
2026-04-06 05:02:01.111010 | orchestrator |
2026-04-06 05:02:01.111023 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-04-06 05:02:01.111038 | orchestrator | Monday 06 April 2026 05:01:44 +0000 (0:00:11.810) 0:00:54.408 **********
2026-04-06 05:02:01.111051 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:02:01.111065 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:02:01.111156 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:02:01.111172 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:02:01.111186 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:02:01.111200 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:02:01.111210 | orchestrator |
2026-04-06 05:02:01.111222 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-04-06 05:02:01.111267 | orchestrator | Monday 06 April 2026 05:01:47 +0000 (0:00:02.383) 0:00:56.792 **********
2026-04-06 05:02:01.111279 | orchestrator | changed: [testbed-node-3]
2026-04-06 05:02:01.111290 | orchestrator | changed: [testbed-node-4]
2026-04-06 05:02:01.111301 | orchestrator | changed: [testbed-node-5]
2026-04-06 05:02:01.111312 | orchestrator | changed: [testbed-node-0]
2026-04-06 05:02:01.111323 | orchestrator | changed: [testbed-node-1]
2026-04-06 05:02:01.111333 | orchestrator | changed: [testbed-node-2]
2026-04-06 05:02:01.111344 | orchestrator |
2026-04-06 05:02:01.111355 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-04-06 05:02:01.111366 | orchestrator | Monday 06 April 2026 05:01:58 +0000 (0:00:11.326) 0:01:08.119 **********
2026-04-06 05:02:01.111378 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-04-06 05:02:01.111403 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-04-06 05:02:01.111415 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-04-06 05:02:01.111426 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-04-06 05:02:01.111437 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-04-06 05:02:01.111457 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-04-06 05:02:13.907599 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-04-06 05:02:13.907703 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-04-06 05:02:13.907715 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-04-06 05:02:13.907722 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-04-06 05:02:13.907753 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-04-06 05:02:13.907761 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-04-06 05:02:13.907768 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-06 05:02:13.907776 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-06 05:02:13.907783 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-06 05:02:13.907790 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-06 05:02:13.907810 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-06 05:02:13.907818 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-06 05:02:13.907825 | orchestrator |
2026-04-06 05:02:13.907834 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-04-06 05:02:13.907843 | orchestrator | Monday 06 April 2026 05:02:06 +0000 (0:00:07.722) 0:01:15.841 **********
2026-04-06 05:02:13.907851 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-04-06 05:02:13.907858 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:02:13.907867 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-04-06 05:02:13.907875 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:02:13.907881 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-04-06 05:02:13.907889 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:02:13.907897 | orchestrator | ok: [testbed-node-0] => (item=br-ex)
2026-04-06 05:02:13.907904 | orchestrator | ok: [testbed-node-1] => (item=br-ex)
2026-04-06 05:02:13.907911 | orchestrator | ok: [testbed-node-2] => (item=br-ex)
2026-04-06 05:02:13.907919 | orchestrator |
2026-04-06 05:02:13.907927 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-04-06 05:02:13.907934 | orchestrator | Monday 06 April 2026 05:02:09 +0000 (0:00:03.203) 0:01:19.045 **********
2026-04-06 05:02:13.907941 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-04-06 05:02:13.907949 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:02:13.907956 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-04-06 05:02:13.907963 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:02:13.907970 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-04-06 05:02:13.907978 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:02:13.907985 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-04-06 05:02:13.907992 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-04-06 05:02:13.907999 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-04-06 05:02:13.908006 | orchestrator |
2026-04-06 05:02:13.908013 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 05:02:13.908021 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-06 05:02:13.908030 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-06 05:02:13.908037 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-06 05:02:13.908044 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 05:02:13.908051 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 05:02:13.908066 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 05:02:13.908074 | orchestrator |
2026-04-06 05:02:13.908082 | orchestrator |
2026-04-06 05:02:13.908089 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 05:02:13.908096 | orchestrator | Monday 06 April 2026 05:02:13 +0000 (0:00:04.115) 0:01:23.161 **********
2026-04-06 05:02:13.908103 | orchestrator | ===============================================================================
2026-04-06 05:02:13.908111 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.81s
2026-04-06 05:02:13.908135 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 11.33s
2026-04-06 05:02:13.908143 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.72s
2026-04-06 05:02:13.908151 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.62s
2026-04-06 05:02:13.908159 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.12s
2026-04-06 05:02:13.908183 | orchestrator | openvswitch : include_tasks --------------------------------------------- 4.07s
2026-04-06 05:02:13.908192 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 3.74s
2026-04-06 05:02:13.908210 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.56s
2026-04-06 05:02:13.908219 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.20s
2026-04-06 05:02:13.908261 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.85s
2026-04-06 05:02:13.908272 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.75s
2026-04-06 05:02:13.908280 | orchestrator | module-load : Load modules ---------------------------------------------- 2.75s
2026-04-06 05:02:13.908288 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.53s
2026-04-06 05:02:13.908296 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.50s
2026-04-06 05:02:13.908304 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.39s
2026-04-06 05:02:13.908312 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.38s
2026-04-06 05:02:13.908320 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.33s
2026-04-06 05:02:13.908335 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.33s
2026-04-06 05:02:13.908343 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 2.23s
2026-04-06 05:02:13.908351 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 2.09s
2026-04-06 05:02:14.092542 | orchestrator | + osism apply -a upgrade ovn
2026-04-06 05:02:15.495594 | orchestrator | 2026-04-06 05:02:15 | INFO  | Prepare task for execution of ovn.
2026-04-06 05:02:15.572215 | orchestrator | 2026-04-06 05:02:15 | INFO  | Task 679202cf-bba1-413e-80d9-47d1261d1eb6 (ovn) was prepared for execution.
2026-04-06 05:02:15.572454 | orchestrator | 2026-04-06 05:02:15 | INFO  | It takes a moment until task 679202cf-bba1-413e-80d9-47d1261d1eb6 (ovn) has been started and output is visible here.
2026-04-06 05:02:29.089614 | orchestrator |
2026-04-06 05:02:29.089732 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 05:02:29.089745 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-06 05:02:29.089754 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-06 05:02:29.089769 | orchestrator |
2026-04-06 05:02:29.089775 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 05:02:29.089782 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-06 05:02:29.089809 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-06 05:02:29.089823 | orchestrator | Monday 06 April 2026 05:02:20 +0000 (0:00:01.206) 0:00:01.206 **********
2026-04-06 05:02:29.089830 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:02:29.089838 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:02:29.089844 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:02:29.089851 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:02:29.089858 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:02:29.089864 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:02:29.089871 | orchestrator |
2026-04-06 05:02:29.089878 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 05:02:29.089885 | orchestrator | Monday 06 April 2026 05:02:21 +0000 (0:00:01.478) 0:00:02.684
********** 2026-04-06 05:02:29.089891 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-04-06 05:02:29.089899 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-04-06 05:02:29.089905 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-04-06 05:02:29.089912 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-04-06 05:02:29.089919 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-04-06 05:02:29.089926 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-04-06 05:02:29.089933 | orchestrator | 2026-04-06 05:02:29.089939 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-04-06 05:02:29.089946 | orchestrator | 2026-04-06 05:02:29.089953 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-04-06 05:02:29.089959 | orchestrator | Monday 06 April 2026 05:02:23 +0000 (0:00:01.523) 0:00:04.208 ********** 2026-04-06 05:02:29.089966 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 05:02:29.089974 | orchestrator | 2026-04-06 05:02:29.089981 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-04-06 05:02:29.089987 | orchestrator | Monday 06 April 2026 05:02:24 +0000 (0:00:01.710) 0:00:05.919 ********** 2026-04-06 05:02:29.089996 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 
05:02:29.090005 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:29.090012 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:29.090077 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:29.090107 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:29.090115 | orchestrator | ok: [testbed-node-5] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:29.090122 | orchestrator | 2026-04-06 05:02:29.090129 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-04-06 05:02:29.090136 | orchestrator | Monday 06 April 2026 05:02:26 +0000 (0:00:01.855) 0:00:07.774 ********** 2026-04-06 05:02:29.090143 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:29.090150 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:29.090157 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:29.090165 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:29.090174 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:29.090182 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:29.090195 | orchestrator | 2026-04-06 05:02:29.090207 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-04-06 05:02:29.090280 | orchestrator | Monday 06 April 2026 05:02:28 +0000 (0:00:01.894) 0:00:09.668 ********** 2026-04-06 05:02:29.090290 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:29.090306 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:33.696054 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:33.696181 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:33.696200 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:33.696300 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:33.696315 | orchestrator | 2026-04-06 05:02:33.696328 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-06 05:02:33.696340 | orchestrator | Monday 06 April 2026 05:02:29 +0000 (0:00:00.997) 0:00:10.666 ********** 2026-04-06 05:02:33.696352 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:33.696364 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-04-06 05:02:33.696416 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:33.696429 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:33.696459 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:33.696471 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:33.696483 | orchestrator | 2026-04-06 05:02:33.696494 | 
orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-04-06 05:02:33.696505 | orchestrator | Monday 06 April 2026 05:02:31 +0000 (0:00:02.056) 0:00:12.723 ********** 2026-04-06 05:02:33.696517 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:33.696531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:33.696543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:33.696554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:33.696573 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:33.696585 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:02:33.696598 | orchestrator | 2026-04-06 05:02:33.696613 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-04-06 05:02:33.696626 | orchestrator | Monday 06 April 2026 05:02:33 +0000 (0:00:01.480) 0:00:14.204 ********** 2026-04-06 05:02:33.696641 | orchestrator | changed: [testbed-node-0] => { 2026-04-06 05:02:33.696655 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 05:02:33.696668 | orchestrator | } 2026-04-06 05:02:33.696680 | orchestrator | changed: [testbed-node-1] => { 2026-04-06 05:02:33.696693 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 05:02:33.696705 | orchestrator | } 2026-04-06 05:02:33.696718 | orchestrator | changed: [testbed-node-2] => { 2026-04-06 05:02:33.696731 | orchestrator |  "msg": "Notifying handlers" 
2026-04-06 05:02:33.696744 | orchestrator | } 2026-04-06 05:02:33.696755 | orchestrator | changed: [testbed-node-3] => { 2026-04-06 05:02:33.696766 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 05:02:33.696777 | orchestrator | } 2026-04-06 05:02:33.696788 | orchestrator | changed: [testbed-node-4] => { 2026-04-06 05:02:33.696836 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 05:02:33.696848 | orchestrator | } 2026-04-06 05:02:33.696866 | orchestrator | changed: [testbed-node-5] => { 2026-04-06 05:02:53.749475 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 05:02:53.749591 | orchestrator | } 2026-04-06 05:02:53.749608 | orchestrator | 2026-04-06 05:02:53.749621 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-06 05:02:53.749634 | orchestrator | Monday 06 April 2026 05:02:33 +0000 (0:00:00.685) 0:00:14.889 ********** 2026-04-06 05:02:53.749649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 05:02:53.749665 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:02:53.749678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 05:02:53.749686 | orchestrator 
| skipping: [testbed-node-1] 2026-04-06 05:02:53.749693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 05:02:53.749722 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:02:53.749730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 05:02:53.749737 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:02:53.749744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 05:02:53.749751 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:02:53.749770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 05:02:53.749778 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:02:53.749790 | orchestrator | 2026-04-06 05:02:53.749801 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-06 05:02:53.749812 | orchestrator | Monday 06 April 2026 05:02:35 +0000 (0:00:01.698) 0:00:16.588 ********** 2026-04-06 05:02:53.749821 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:02:53.749829 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:02:53.749836 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:02:53.749842 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:02:53.749849 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:02:53.749855 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:02:53.749862 | orchestrator | 2026-04-06 05:02:53.749869 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-04-06 05:02:53.749875 | orchestrator | Monday 06 April 2026 05:02:38 +0000 (0:00:02.620) 0:00:19.209 ********** 2026-04-06 05:02:53.749882 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-04-06 05:02:53.749889 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-04-06 05:02:53.749896 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-04-06 05:02:53.749903 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-04-06 05:02:53.749923 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-04-06 05:02:53.749930 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-04-06 
05:02:53.749937 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-06 05:02:53.749944 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-06 05:02:53.749952 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-06 05:02:53.749963 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-06 05:02:53.749981 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-06 05:02:53.749993 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-06 05:02:53.750004 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-06 05:02:53.750072 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-06 05:02:53.750083 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-06 05:02:53.750091 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-06 05:02:53.750100 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-06 05:02:53.750116 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-06 05:02:53.750125 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-06 05:02:53.750134 | 
orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-06 05:02:53.750141 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-06 05:02:53.750150 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-06 05:02:53.750158 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-06 05:02:53.750166 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-06 05:02:53.750174 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-06 05:02:53.750182 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-06 05:02:53.750239 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-06 05:02:53.750247 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-06 05:02:53.750256 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-06 05:02:53.750264 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-06 05:02:53.750272 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-06 05:02:53.750280 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-06 05:02:53.750287 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-06 05:02:53.750300 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-06 05:02:53.750308 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': 
False}) 2026-04-06 05:02:53.750316 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-06 05:02:53.750324 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-06 05:02:53.750333 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-06 05:02:53.750342 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-06 05:02:53.750349 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-06 05:02:53.750375 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-06 05:02:53.750381 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-06 05:02:53.750395 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-04-06 05:05:21.088768 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-04-06 05:05:21.088882 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-04-06 05:05:21.088898 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-04-06 05:05:21.088910 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-04-06 05:05:21.088921 | orchestrator | ok: [testbed-node-0] => (item={'name': 
'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-04-06 05:05:21.088933 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-06 05:05:21.088946 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-06 05:05:21.088957 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-06 05:05:21.088968 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-06 05:05:21.088980 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-06 05:05:21.088991 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-06 05:05:21.089002 | orchestrator | 2026-04-06 05:05:21.089014 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-06 05:05:21.089026 | orchestrator | Monday 06 April 2026 05:02:57 +0000 (0:00:19.368) 0:00:38.577 ********** 2026-04-06 05:05:21.089037 | orchestrator | 2026-04-06 05:05:21.089048 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-06 05:05:21.089059 | orchestrator | Monday 06 April 2026 05:02:57 +0000 (0:00:00.081) 0:00:38.658 ********** 2026-04-06 05:05:21.089111 | orchestrator | 2026-04-06 05:05:21.089122 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-06 05:05:21.089134 | orchestrator | Monday 06 April 2026 05:02:57 +0000 (0:00:00.072) 0:00:38.731 ********** 2026-04-06 05:05:21.089144 | orchestrator | 2026-04-06 05:05:21.089155 | orchestrator | TASK [ovn-controller 
: Flush handlers] ***************************************** 2026-04-06 05:05:21.089166 | orchestrator | Monday 06 April 2026 05:02:57 +0000 (0:00:00.236) 0:00:38.967 ********** 2026-04-06 05:05:21.089178 | orchestrator | 2026-04-06 05:05:21.089189 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-06 05:05:21.089201 | orchestrator | Monday 06 April 2026 05:02:57 +0000 (0:00:00.074) 0:00:39.042 ********** 2026-04-06 05:05:21.089212 | orchestrator | 2026-04-06 05:05:21.089222 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-06 05:05:21.089233 | orchestrator | Monday 06 April 2026 05:02:58 +0000 (0:00:00.073) 0:00:39.115 ********** 2026-04-06 05:05:21.089244 | orchestrator | 2026-04-06 05:05:21.089255 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-04-06 05:05:21.089267 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-06 05:05:21.089278 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-06 05:05:21.089330 | orchestrator | Monday 06 April 2026 05:02:58 +0000 (0:00:00.073) 0:00:39.189 ********** 2026-04-06 05:05:21.089344 | orchestrator | changed: [testbed-node-3] 2026-04-06 05:05:21.089358 | orchestrator | changed: [testbed-node-5] 2026-04-06 05:05:21.089370 | orchestrator | changed: [testbed-node-4] 2026-04-06 05:05:21.089383 | orchestrator | changed: [testbed-node-2] 2026-04-06 05:05:21.089395 | orchestrator | changed: [testbed-node-1] 2026-04-06 05:05:21.089423 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:05:21.089437 | orchestrator | 2026-04-06 05:05:21.089455 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-04-06 05:05:21.089474 | orchestrator | 2026-04-06 05:05:21.089493 | orchestrator | TASK [ovn-db : include_tasks] 
************************************************** 2026-04-06 05:05:21.089513 | orchestrator | Monday 06 April 2026 05:05:09 +0000 (0:02:11.229) 0:02:50.418 ********** 2026-04-06 05:05:21.089531 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 05:05:21.089550 | orchestrator | 2026-04-06 05:05:21.089568 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-06 05:05:21.089586 | orchestrator | Monday 06 April 2026 05:05:10 +0000 (0:00:00.972) 0:02:51.390 ********** 2026-04-06 05:05:21.089605 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 05:05:21.089624 | orchestrator | 2026-04-06 05:05:21.089644 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-04-06 05:05:21.089664 | orchestrator | Monday 06 April 2026 05:05:11 +0000 (0:00:01.156) 0:02:52.548 ********** 2026-04-06 05:05:21.089683 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:05:21.089701 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:05:21.089718 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:05:21.089734 | orchestrator | 2026-04-06 05:05:21.089752 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-04-06 05:05:21.089795 | orchestrator | Monday 06 April 2026 05:05:12 +0000 (0:00:00.776) 0:02:53.324 ********** 2026-04-06 05:05:21.089814 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:05:21.089833 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:05:21.089852 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:05:21.089870 | orchestrator | 2026-04-06 05:05:21.089888 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-04-06 05:05:21.089907 | orchestrator | Monday 06 April 2026 05:05:12 +0000 (0:00:00.532) 0:02:53.857 ********** 
2026-04-06 05:05:21.089926 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:05:21.089945 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:05:21.089964 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:05:21.089977 | orchestrator | 2026-04-06 05:05:21.089988 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-04-06 05:05:21.089999 | orchestrator | Monday 06 April 2026 05:05:13 +0000 (0:00:00.416) 0:02:54.273 ********** 2026-04-06 05:05:21.090010 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:05:21.090122 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:05:21.090135 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:05:21.090147 | orchestrator | 2026-04-06 05:05:21.090158 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-04-06 05:05:21.090169 | orchestrator | Monday 06 April 2026 05:05:13 +0000 (0:00:00.346) 0:02:54.619 ********** 2026-04-06 05:05:21.090180 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:05:21.090191 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:05:21.090202 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:05:21.090213 | orchestrator | 2026-04-06 05:05:21.090224 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-04-06 05:05:21.090235 | orchestrator | Monday 06 April 2026 05:05:13 +0000 (0:00:00.382) 0:02:55.002 ********** 2026-04-06 05:05:21.090246 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:05:21.090273 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:05:21.090285 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:05:21.090296 | orchestrator | 2026-04-06 05:05:21.090307 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-04-06 05:05:21.090318 | orchestrator | Monday 06 April 2026 05:05:14 +0000 (0:00:00.548) 0:02:55.551 ********** 2026-04-06 05:05:21.090329 | orchestrator | ok: 
[testbed-node-1] 2026-04-06 05:05:21.090340 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:05:21.090351 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:05:21.090362 | orchestrator | 2026-04-06 05:05:21.090372 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-04-06 05:05:21.090384 | orchestrator | Monday 06 April 2026 05:05:15 +0000 (0:00:00.805) 0:02:56.356 ********** 2026-04-06 05:05:21.090395 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:05:21.090406 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:05:21.090416 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:05:21.090431 | orchestrator | 2026-04-06 05:05:21.090450 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-04-06 05:05:21.090468 | orchestrator | Monday 06 April 2026 05:05:15 +0000 (0:00:00.352) 0:02:56.709 ********** 2026-04-06 05:05:21.090488 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:05:21.090508 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:05:21.090529 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:05:21.090545 | orchestrator | 2026-04-06 05:05:21.090563 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-04-06 05:05:21.090581 | orchestrator | Monday 06 April 2026 05:05:16 +0000 (0:00:00.827) 0:02:57.536 ********** 2026-04-06 05:05:21.090599 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:05:21.090618 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:05:21.090637 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:05:21.090656 | orchestrator | 2026-04-06 05:05:21.090673 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-04-06 05:05:21.090713 | orchestrator | Monday 06 April 2026 05:05:16 +0000 (0:00:00.548) 0:02:58.085 ********** 2026-04-06 05:05:21.090731 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:05:21.090751 | orchestrator | 
skipping: [testbed-node-1] 2026-04-06 05:05:21.090769 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:05:21.090788 | orchestrator | 2026-04-06 05:05:21.090801 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-04-06 05:05:21.090812 | orchestrator | Monday 06 April 2026 05:05:17 +0000 (0:00:00.345) 0:02:58.430 ********** 2026-04-06 05:05:21.090823 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:05:21.090834 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:05:21.090844 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:05:21.090855 | orchestrator | 2026-04-06 05:05:21.090866 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-04-06 05:05:21.090877 | orchestrator | Monday 06 April 2026 05:05:17 +0000 (0:00:00.347) 0:02:58.778 ********** 2026-04-06 05:05:21.090888 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:05:21.090899 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:05:21.090910 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:05:21.090920 | orchestrator | 2026-04-06 05:05:21.090942 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-04-06 05:05:21.090963 | orchestrator | Monday 06 April 2026 05:05:18 +0000 (0:00:01.060) 0:02:59.839 ********** 2026-04-06 05:05:21.090980 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:05:21.090998 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:05:21.091014 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:05:21.091030 | orchestrator | 2026-04-06 05:05:21.091048 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-04-06 05:05:21.091104 | orchestrator | Monday 06 April 2026 05:05:19 +0000 (0:00:00.601) 0:03:00.441 ********** 2026-04-06 05:05:21.091118 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:05:21.091129 | orchestrator | ok: [testbed-node-1] 2026-04-06 
05:05:21.091148 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:05:21.091167 | orchestrator | 2026-04-06 05:05:21.091202 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-04-06 05:05:21.091221 | orchestrator | Monday 06 April 2026 05:05:20 +0000 (0:00:00.815) 0:03:01.256 ********** 2026-04-06 05:05:21.091241 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:05:21.091260 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:05:21.091279 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:05:21.091298 | orchestrator | 2026-04-06 05:05:21.091317 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-04-06 05:05:21.091336 | orchestrator | Monday 06 April 2026 05:05:20 +0000 (0:00:00.378) 0:03:01.634 ********** 2026-04-06 05:05:21.091356 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:05:21.091377 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:05:21.091395 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:05:21.091411 | orchestrator | 2026-04-06 05:05:21.091438 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-06 05:05:27.863188 | orchestrator | Monday 06 April 2026 05:05:21 +0000 (0:00:00.545) 0:03:02.180 ********** 2026-04-06 05:05:27.863290 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:05:27.863305 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:05:27.863315 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:05:27.863326 | orchestrator | 2026-04-06 05:05:27.863336 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-06 05:05:27.863346 | orchestrator | Monday 06 April 2026 05:05:21 +0000 (0:00:00.703) 0:03:02.883 ********** 2026-04-06 05:05:27.863360 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 
'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:27.863374 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:27.863385 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:27.863397 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 
'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:27.863428 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:27.863473 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:27.863513 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:27.863531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': 
{'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 05:05:27.863548 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:27.863567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 05:05:27.863581 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:27.863591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 05:05:27.863611 | orchestrator | 2026-04-06 05:05:27.863621 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-06 05:05:27.863631 | orchestrator | Monday 06 April 2026 05:05:24 +0000 (0:00:02.907) 0:03:05.790 ********** 2026-04-06 05:05:27.863648 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:27.863659 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:27.863678 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:38.495304 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:38.495439 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:38.495457 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 
'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:38.495470 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:38.495508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 05:05:38.495546 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:38.495559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 05:05:38.495592 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:38.495605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 05:05:38.495618 | orchestrator | 2026-04-06 05:05:38.495631 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-04-06 05:05:38.495644 | orchestrator | Monday 06 April 2026 05:05:29 +0000 (0:00:05.091) 0:03:10.882 ********** 2026-04-06 05:05:38.495656 | orchestrator | included: 
/ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-04-06 05:05:38.495668 | orchestrator | 2026-04-06 05:05:38.495679 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-04-06 05:05:38.495691 | orchestrator | Monday 06 April 2026 05:05:31 +0000 (0:00:01.270) 0:03:12.152 ********** 2026-04-06 05:05:38.495702 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:05:38.495715 | orchestrator | changed: [testbed-node-1] 2026-04-06 05:05:38.495725 | orchestrator | changed: [testbed-node-2] 2026-04-06 05:05:38.495736 | orchestrator | 2026-04-06 05:05:38.495748 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-04-06 05:05:38.495759 | orchestrator | Monday 06 April 2026 05:05:31 +0000 (0:00:00.700) 0:03:12.852 ********** 2026-04-06 05:05:38.495769 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:05:38.495781 | orchestrator | changed: [testbed-node-1] 2026-04-06 05:05:38.495801 | orchestrator | changed: [testbed-node-2] 2026-04-06 05:05:38.495812 | orchestrator | 2026-04-06 05:05:38.495823 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-04-06 05:05:38.495834 | orchestrator | Monday 06 April 2026 05:05:33 +0000 (0:00:01.825) 0:03:14.677 ********** 2026-04-06 05:05:38.495845 | orchestrator | changed: [testbed-node-1] 2026-04-06 05:05:38.495856 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:05:38.495867 | orchestrator | changed: [testbed-node-2] 2026-04-06 05:05:38.495878 | orchestrator | 2026-04-06 05:05:38.495889 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-04-06 05:05:38.495900 | orchestrator | Monday 06 April 2026 05:05:35 +0000 (0:00:01.843) 0:03:16.521 ********** 2026-04-06 05:05:38.495912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:38.495931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:38.495943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:38.495955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 
'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:38.495975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:41.697751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:41.697873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 
05:05:41.697889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 05:05:41.697901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:41.697926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 05:05:41.697937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:05:41.697947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 05:05:41.697958 | orchestrator | 2026-04-06 05:05:41.697970 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-04-06 05:05:41.697981 | orchestrator | Monday 06 April 2026 05:05:39 +0000 (0:00:04.091) 0:03:20.613 ********** 2026-04-06 05:05:41.697992 | orchestrator | changed: [testbed-node-0] => { 2026-04-06 05:05:41.698004 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 05:05:41.698103 | orchestrator | } 2026-04-06 05:05:41.698117 | orchestrator | changed: [testbed-node-1] => { 2026-04-06 05:05:41.698127 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 05:05:41.698136 | orchestrator | } 2026-04-06 05:05:41.698146 | orchestrator | changed: [testbed-node-2] => { 2026-04-06 05:05:41.698165 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 05:05:41.698175 | orchestrator | } 2026-04-06 05:05:41.698184 | orchestrator | 2026-04-06 05:05:41.698211 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-06 05:05:41.698222 | orchestrator | Monday 06 April 2026 05:05:39 +0000 (0:00:00.352) 0:03:20.965 ********** 2026-04-06 05:05:41.698233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 05:05:41.698244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 05:05:41.698255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 05:05:41.698271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 05:05:41.698282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 05:05:41.698296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 05:05:41.698307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 
05:05:41.698333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 05:07:22.185245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 05:07:22.185350 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 05:07:22.185359 | orchestrator | 2026-04-06 05:07:22.185365 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-04-06 05:07:22.185371 | orchestrator | Monday 06 April 2026 05:05:42 +0000 (0:00:02.733) 
0:03:23.698 ********** 2026-04-06 05:07:22.185375 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-04-06 05:07:22.185380 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-04-06 05:07:22.185384 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-04-06 05:07:22.185388 | orchestrator | 2026-04-06 05:07:22.185392 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-04-06 05:07:22.185397 | orchestrator | Monday 06 April 2026 05:06:06 +0000 (0:00:23.448) 0:03:47.147 ********** 2026-04-06 05:07:22.185401 | orchestrator | changed: [testbed-node-0] => { 2026-04-06 05:07:22.185405 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 05:07:22.185422 | orchestrator | } 2026-04-06 05:07:22.185426 | orchestrator | changed: [testbed-node-1] => { 2026-04-06 05:07:22.185430 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 05:07:22.185434 | orchestrator | } 2026-04-06 05:07:22.185437 | orchestrator | changed: [testbed-node-2] => { 2026-04-06 05:07:22.185441 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 05:07:22.185445 | orchestrator | } 2026-04-06 05:07:22.185448 | orchestrator | 2026-04-06 05:07:22.185452 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-06 05:07:22.185456 | orchestrator | Monday 06 April 2026 05:06:06 +0000 (0:00:00.815) 0:03:47.963 ********** 2026-04-06 05:07:22.185460 | orchestrator | 2026-04-06 05:07:22.185464 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-06 05:07:22.185467 | orchestrator | Monday 06 April 2026 05:06:06 +0000 (0:00:00.093) 0:03:48.056 ********** 2026-04-06 05:07:22.185472 | orchestrator | 2026-04-06 05:07:22.185475 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-06 05:07:22.185479 | orchestrator | Monday 06 April 2026 05:06:07 +0000 (0:00:00.076) 0:03:48.132 ********** 
2026-04-06 05:07:22.185497 | orchestrator | 2026-04-06 05:07:22.185501 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-06 05:07:22.185505 | orchestrator | Monday 06 April 2026 05:06:07 +0000 (0:00:00.077) 0:03:48.209 ********** 2026-04-06 05:07:22.185508 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:07:22.185512 | orchestrator | changed: [testbed-node-1] 2026-04-06 05:07:22.185516 | orchestrator | changed: [testbed-node-2] 2026-04-06 05:07:22.185520 | orchestrator | 2026-04-06 05:07:22.185524 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-06 05:07:22.185527 | orchestrator | Monday 06 April 2026 05:06:22 +0000 (0:00:15.567) 0:04:03.776 ********** 2026-04-06 05:07:22.185531 | orchestrator | changed: [testbed-node-1] 2026-04-06 05:07:22.185535 | orchestrator | changed: [testbed-node-2] 2026-04-06 05:07:22.185538 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:07:22.185542 | orchestrator | 2026-04-06 05:07:22.185546 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-04-06 05:07:22.185550 | orchestrator | Monday 06 April 2026 05:06:38 +0000 (0:00:15.444) 0:04:19.221 ********** 2026-04-06 05:07:22.185553 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-04-06 05:07:22.185557 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-04-06 05:07:22.185561 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-04-06 05:07:22.185565 | orchestrator | 2026-04-06 05:07:22.185568 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-06 05:07:22.185572 | orchestrator | Monday 06 April 2026 05:06:53 +0000 (0:00:15.154) 0:04:34.376 ********** 2026-04-06 05:07:22.185576 | orchestrator | changed: [testbed-node-1] 2026-04-06 05:07:22.185579 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:07:22.185583 | orchestrator | 
changed: [testbed-node-2] 2026-04-06 05:07:22.185587 | orchestrator | 2026-04-06 05:07:22.185591 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-06 05:07:22.185594 | orchestrator | Monday 06 April 2026 05:07:09 +0000 (0:00:16.223) 0:04:50.600 ********** 2026-04-06 05:07:22.185598 | orchestrator | Pausing for 5 seconds 2026-04-06 05:07:22.185602 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:07:22.185606 | orchestrator | 2026-04-06 05:07:22.185610 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-06 05:07:22.185614 | orchestrator | Monday 06 April 2026 05:07:14 +0000 (0:00:05.166) 0:04:55.767 ********** 2026-04-06 05:07:22.185618 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:07:22.185621 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:07:22.185625 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:07:22.185629 | orchestrator | 2026-04-06 05:07:22.185632 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-06 05:07:22.185647 | orchestrator | Monday 06 April 2026 05:07:15 +0000 (0:00:00.842) 0:04:56.610 ********** 2026-04-06 05:07:22.185651 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:07:22.185655 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:07:22.185659 | orchestrator | changed: [testbed-node-1] 2026-04-06 05:07:22.185662 | orchestrator | 2026-04-06 05:07:22.185666 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-06 05:07:22.185670 | orchestrator | Monday 06 April 2026 05:07:16 +0000 (0:00:00.752) 0:04:57.362 ********** 2026-04-06 05:07:22.185674 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:07:22.185677 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:07:22.185681 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:07:22.185685 | orchestrator | 2026-04-06 05:07:22.185688 | orchestrator | TASK 
[ovn-db : Configure OVN SB connection settings] *************************** 2026-04-06 05:07:22.185692 | orchestrator | Monday 06 April 2026 05:07:17 +0000 (0:00:00.823) 0:04:58.186 ********** 2026-04-06 05:07:22.185696 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:07:22.185699 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:07:22.185703 | orchestrator | changed: [testbed-node-2] 2026-04-06 05:07:22.185707 | orchestrator | 2026-04-06 05:07:22.185711 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-06 05:07:22.185718 | orchestrator | Monday 06 April 2026 05:07:17 +0000 (0:00:00.878) 0:04:59.064 ********** 2026-04-06 05:07:22.185722 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:07:22.185726 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:07:22.185729 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:07:22.185733 | orchestrator | 2026-04-06 05:07:22.185737 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-06 05:07:22.185741 | orchestrator | Monday 06 April 2026 05:07:18 +0000 (0:00:00.759) 0:04:59.824 ********** 2026-04-06 05:07:22.185744 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:07:22.185748 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:07:22.185752 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:07:22.185755 | orchestrator | 2026-04-06 05:07:22.185759 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-04-06 05:07:22.185763 | orchestrator | Monday 06 April 2026 05:07:19 +0000 (0:00:01.021) 0:05:00.845 ********** 2026-04-06 05:07:22.185767 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-04-06 05:07:22.185770 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-04-06 05:07:22.185774 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-04-06 05:07:22.185778 | orchestrator | 2026-04-06 05:07:22.185781 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-06 05:07:22.185789 | orchestrator | testbed-node-0 : ok=48  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-06 05:07:22.185795 | orchestrator | testbed-node-1 : ok=48  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-06 05:07:22.185798 | orchestrator | testbed-node-2 : ok=48  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-06 05:07:22.185802 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 05:07:22.185806 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 05:07:22.185810 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-06 05:07:22.185813 | orchestrator | 2026-04-06 05:07:22.185817 | orchestrator | 2026-04-06 05:07:22.185821 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 05:07:22.185825 | orchestrator | Monday 06 April 2026 05:07:22 +0000 (0:00:02.416) 0:05:03.262 ********** 2026-04-06 05:07:22.185829 | orchestrator | =============================================================================== 2026-04-06 05:07:22.185832 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 131.23s 2026-04-06 05:07:22.185836 | orchestrator | service-check-containers : ovn_db | Check containers with iteration ---- 23.45s 2026-04-06 05:07:22.185840 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.37s 2026-04-06 05:07:22.185843 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 16.22s 2026-04-06 05:07:22.185847 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 15.57s 2026-04-06 05:07:22.185851 | orchestrator | ovn-db : 
Restart ovn-sb-db container ----------------------------------- 15.44s 2026-04-06 05:07:22.185854 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 15.16s 2026-04-06 05:07:22.185858 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 5.17s 2026-04-06 05:07:22.185862 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.09s 2026-04-06 05:07:22.185866 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 4.09s 2026-04-06 05:07:22.185869 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.91s 2026-04-06 05:07:22.185877 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.73s 2026-04-06 05:07:22.185880 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.62s 2026-04-06 05:07:22.185884 | orchestrator | ovn-db : Wait for ovn-sb-db-relay --------------------------------------- 2.42s 2026-04-06 05:07:22.185888 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.06s 2026-04-06 05:07:22.185892 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.89s 2026-04-06 05:07:22.185897 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.85s 2026-04-06 05:07:22.567358 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 1.84s 2026-04-06 05:07:22.567428 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 1.83s 2026-04-06 05:07:22.567434 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.71s 2026-04-06 05:07:22.754659 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-04-06 05:07:22.754773 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-06 05:07:22.754800 
| orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh 2026-04-06 05:07:22.763707 | orchestrator | + set -e 2026-04-06 05:07:22.763780 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-06 05:07:22.763793 | orchestrator | ++ export INTERACTIVE=false 2026-04-06 05:07:22.763805 | orchestrator | ++ INTERACTIVE=false 2026-04-06 05:07:22.763816 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-06 05:07:22.763827 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-06 05:07:22.763839 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes 2026-04-06 05:07:24.092789 | orchestrator | 2026-04-06 05:07:24 | INFO  | Prepare task for execution of ceph-rolling_update. 2026-04-06 05:07:24.157702 | orchestrator | 2026-04-06 05:07:24 | INFO  | Task e3935982-4fcc-4a67-9091-e34feec0da91 (ceph-rolling_update) was prepared for execution. 2026-04-06 05:07:24.157808 | orchestrator | 2026-04-06 05:07:24 | INFO  | It takes a moment until task e3935982-4fcc-4a67-9091-e34feec0da91 (ceph-rolling_update) has been started and output is visible here. 
2026-04-06 05:08:24.078690 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-06 05:08:24.078808 | orchestrator | 2.16.14 2026-04-06 05:08:24.078826 | orchestrator | 2026-04-06 05:08:24.078839 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] **************** 2026-04-06 05:08:24.078852 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-06 05:08:24.078864 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-06 05:08:24.078886 | orchestrator | 2026-04-06 05:08:24.078897 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ****************** 2026-04-06 05:08:24.078924 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-06 05:08:24.078935 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-06 05:08:24.078957 | orchestrator | Monday 06 April 2026 05:07:31 +0000 (0:00:01.134) 0:00:01.134 ********** 2026-04-06 05:08:24.078969 | orchestrator | skipping: [localhost] 2026-04-06 05:08:24.078980 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors 2026-04-06 05:08:24.078991 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss 2026-04-06 05:08:24.079067 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients 2026-04-06 05:08:24.079078 | orchestrator | 2026-04-06 05:08:24.079089 | orchestrator | PLAY [Gather facts and check the init system] ********************************** 2026-04-06 05:08:24.079100 | orchestrator | 2026-04-06 05:08:24.079111 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ****************** 2026-04-06 05:08:24.079123 | orchestrator | Monday 06 April 2026 05:07:32 +0000 (0:00:00.705) 0:00:01.839 ********** 2026-04-06 05:08:24.079163 | orchestrator | ok: [testbed-node-0] => { 2026-04-06 
05:08:24.079175 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-04-06 05:08:24.079186 | orchestrator | } 2026-04-06 05:08:24.079197 | orchestrator | ok: [testbed-node-1] => { 2026-04-06 05:08:24.079208 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-04-06 05:08:24.079219 | orchestrator | } 2026-04-06 05:08:24.079233 | orchestrator | ok: [testbed-node-2] => { 2026-04-06 05:08:24.079246 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-04-06 05:08:24.079258 | orchestrator | } 2026-04-06 05:08:24.079271 | orchestrator | ok: [testbed-node-3] => { 2026-04-06 05:08:24.079284 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-04-06 05:08:24.079298 | orchestrator | } 2026-04-06 05:08:24.079311 | orchestrator | ok: [testbed-node-4] => { 2026-04-06 05:08:24.079324 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-04-06 05:08:24.079337 | orchestrator | } 2026-04-06 05:08:24.079351 | orchestrator | ok: [testbed-node-5] => { 2026-04-06 05:08:24.079365 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-04-06 05:08:24.079378 | orchestrator | } 2026-04-06 05:08:24.079391 | orchestrator | ok: [testbed-manager] => { 2026-04-06 05:08:24.079405 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-04-06 05:08:24.079418 | orchestrator | } 2026-04-06 05:08:24.079431 | orchestrator | 2026-04-06 05:08:24.079445 | orchestrator | TASK [Gather facts] ************************************************************ 2026-04-06 05:08:24.079459 | orchestrator | Monday 06 April 2026 05:07:34 +0000 (0:00:02.643) 0:00:04.482 ********** 2026-04-06 05:08:24.079472 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:08:24.079485 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:08:24.079496 | orchestrator | skipping: [testbed-node-2] 
2026-04-06 05:08:24.079508 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:08:24.079518 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:08:24.079529 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:08:24.079540 | orchestrator | ok: [testbed-manager] 2026-04-06 05:08:24.079550 | orchestrator | 2026-04-06 05:08:24.079562 | orchestrator | TASK [Gather and delegate facts] *********************************************** 2026-04-06 05:08:24.079572 | orchestrator | Monday 06 April 2026 05:07:40 +0000 (0:00:06.101) 0:00:10.584 ********** 2026-04-06 05:08:24.079583 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-06 05:08:24.079594 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-06 05:08:24.079605 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:08:24.079616 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-06 05:08:24.079627 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-06 05:08:24.079637 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:08:24.079648 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-06 05:08:24.079660 | orchestrator | 2026-04-06 05:08:24.079670 | orchestrator | TASK [Set_fact rolling_update] ************************************************* 2026-04-06 05:08:24.079681 | orchestrator | Monday 06 April 2026 05:08:11 +0000 (0:00:30.456) 0:00:41.041 ********** 2026-04-06 05:08:24.079692 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:08:24.079703 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:08:24.079713 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:08:24.079724 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:08:24.079735 | orchestrator | ok: 
[testbed-node-4] 2026-04-06 05:08:24.079745 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:08:24.079756 | orchestrator | ok: [testbed-manager] 2026-04-06 05:08:24.079767 | orchestrator | 2026-04-06 05:08:24.079778 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-06 05:08:24.079797 | orchestrator | Monday 06 April 2026 05:08:12 +0000 (0:00:00.925) 0:00:41.966 ********** 2026-04-06 05:08:24.079826 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-04-06 05:08:24.079839 | orchestrator | 2026-04-06 05:08:24.079850 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-06 05:08:24.079861 | orchestrator | Monday 06 April 2026 05:08:14 +0000 (0:00:01.924) 0:00:43.890 ********** 2026-04-06 05:08:24.079873 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:08:24.079884 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:08:24.079895 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:08:24.079905 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:08:24.079916 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:08:24.079927 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:08:24.079938 | orchestrator | ok: [testbed-manager] 2026-04-06 05:08:24.079949 | orchestrator | 2026-04-06 05:08:24.079960 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-06 05:08:24.079977 | orchestrator | Monday 06 April 2026 05:08:15 +0000 (0:00:01.363) 0:00:45.254 ********** 2026-04-06 05:08:24.079988 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:08:24.080018 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:08:24.080030 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:08:24.080041 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:08:24.080052 | orchestrator | ok: [testbed-node-4] 
2026-04-06 05:08:24.080062 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:08:24.080074 | orchestrator | ok: [testbed-manager] 2026-04-06 05:08:24.080085 | orchestrator | 2026-04-06 05:08:24.080096 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-06 05:08:24.080107 | orchestrator | Monday 06 April 2026 05:08:16 +0000 (0:00:00.748) 0:00:46.002 ********** 2026-04-06 05:08:24.080118 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:08:24.080129 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:08:24.080140 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:08:24.080151 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:08:24.080162 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:08:24.080173 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:08:24.080184 | orchestrator | ok: [testbed-manager] 2026-04-06 05:08:24.080195 | orchestrator | 2026-04-06 05:08:24.080206 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-06 05:08:24.080217 | orchestrator | Monday 06 April 2026 05:08:17 +0000 (0:00:01.355) 0:00:47.358 ********** 2026-04-06 05:08:24.080228 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:08:24.080239 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:08:24.080250 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:08:24.080261 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:08:24.080272 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:08:24.080283 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:08:24.080294 | orchestrator | ok: [testbed-manager] 2026-04-06 05:08:24.080305 | orchestrator | 2026-04-06 05:08:24.080316 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-06 05:08:24.080327 | orchestrator | Monday 06 April 2026 05:08:18 +0000 (0:00:00.824) 0:00:48.182 ********** 2026-04-06 05:08:24.080338 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:08:24.080349 | 
orchestrator | ok: [testbed-node-1] 2026-04-06 05:08:24.080360 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:08:24.080371 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:08:24.080382 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:08:24.080393 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:08:24.080404 | orchestrator | ok: [testbed-manager] 2026-04-06 05:08:24.080415 | orchestrator | 2026-04-06 05:08:24.080426 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-06 05:08:24.080437 | orchestrator | Monday 06 April 2026 05:08:19 +0000 (0:00:01.009) 0:00:49.191 ********** 2026-04-06 05:08:24.080448 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:08:24.080459 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:08:24.080477 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:08:24.080488 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:08:24.080499 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:08:24.080511 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:08:24.080522 | orchestrator | ok: [testbed-manager] 2026-04-06 05:08:24.080533 | orchestrator | 2026-04-06 05:08:24.080544 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-06 05:08:24.080555 | orchestrator | Monday 06 April 2026 05:08:20 +0000 (0:00:00.738) 0:00:49.930 ********** 2026-04-06 05:08:24.080566 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:08:24.080577 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:08:24.080588 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:08:24.080599 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:08:24.080610 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:08:24.080621 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:08:24.080632 | orchestrator | skipping: [testbed-manager] 2026-04-06 05:08:24.080643 | orchestrator | 2026-04-06 05:08:24.080654 | orchestrator | TASK [ceph-facts : 
Set_fact ceph_release ceph_stable_release] ****************** 2026-04-06 05:08:24.080665 | orchestrator | Monday 06 April 2026 05:08:21 +0000 (0:00:00.993) 0:00:50.924 ********** 2026-04-06 05:08:24.080676 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:08:24.080688 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:08:24.080699 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:08:24.080710 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:08:24.080720 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:08:24.080731 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:08:24.080742 | orchestrator | ok: [testbed-manager] 2026-04-06 05:08:24.080753 | orchestrator | 2026-04-06 05:08:24.080765 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-06 05:08:24.080778 | orchestrator | Monday 06 April 2026 05:08:21 +0000 (0:00:00.729) 0:00:51.653 ********** 2026-04-06 05:08:24.080798 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-06 05:08:24.080816 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:08:24.080834 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:08:24.080852 | orchestrator | 2026-04-06 05:08:24.080870 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-06 05:08:24.080888 | orchestrator | Monday 06 April 2026 05:08:23 +0000 (0:00:01.204) 0:00:52.857 ********** 2026-04-06 05:08:24.080905 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:08:24.080923 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:08:24.080942 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:08:24.080960 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:08:24.080979 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:08:24.081024 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:08:24.081041 | orchestrator | ok: [testbed-manager] 
2026-04-06 05:08:24.081052 | orchestrator | 2026-04-06 05:08:24.081063 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-06 05:08:24.081084 | orchestrator | Monday 06 April 2026 05:08:24 +0000 (0:00:00.930) 0:00:53.788 ********** 2026-04-06 05:08:35.917816 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-06 05:08:35.917960 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:08:35.917989 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:08:35.918159 | orchestrator | 2026-04-06 05:08:35.918183 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-06 05:08:35.918202 | orchestrator | Monday 06 April 2026 05:08:26 +0000 (0:00:02.262) 0:00:56.051 ********** 2026-04-06 05:08:35.918220 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-06 05:08:35.918247 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-06 05:08:35.918259 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-06 05:08:35.918297 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:08:35.918309 | orchestrator | 2026-04-06 05:08:35.918321 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-06 05:08:35.918332 | orchestrator | Monday 06 April 2026 05:08:26 +0000 (0:00:00.463) 0:00:56.514 ********** 2026-04-06 05:08:35.918345 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-06 05:08:35.918361 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-06 05:08:35.918375 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-06 05:08:35.918389 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:08:35.918403 | orchestrator | 2026-04-06 05:08:35.918417 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-06 05:08:35.918434 | orchestrator | Monday 06 April 2026 05:08:27 +0000 (0:00:00.842) 0:00:57.357 ********** 2026-04-06 05:08:35.918457 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:35.918479 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:35.918499 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:35.918517 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:08:35.918537 | orchestrator | 2026-04-06 05:08:35.918556 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-06 05:08:35.918574 | orchestrator | Monday 06 April 2026 05:08:27 +0000 (0:00:00.163) 0:00:57.521 ********** 2026-04-06 05:08:35.918595 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '7ab3f7ebb0fe', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-06 05:08:24.751852', 'end': '2026-04-06 05:08:24.809074', 'delta': '0:00:00.057222', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7ab3f7ebb0fe'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-06 05:08:35.918655 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '46d5ea15fe96', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-06 05:08:25.342400', 'end': '2026-04-06 05:08:25.391482', 'delta': '0:00:00.049082', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['46d5ea15fe96'], 'stderr_lines': [], 'failed': False, 'failed_when_result': 
False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-06 05:08:35.918690 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a87eea657fd7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-06 05:08:26.124981', 'end': '2026-04-06 05:08:26.169150', 'delta': '0:00:00.044169', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a87eea657fd7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-06 05:08:35.918710 | orchestrator | 2026-04-06 05:08:35.918728 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-06 05:08:35.918744 | orchestrator | Monday 06 April 2026 05:08:28 +0000 (0:00:00.225) 0:00:57.746 ********** 2026-04-06 05:08:35.918762 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:08:35.918779 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:08:35.918796 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:08:35.918814 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:08:35.918831 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:08:35.918851 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:08:35.918869 | orchestrator | ok: [testbed-manager] 2026-04-06 05:08:35.918889 | orchestrator | 2026-04-06 05:08:35.918901 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-06 05:08:35.918912 | orchestrator | Monday 06 April 2026 05:08:29 +0000 (0:00:01.244) 0:00:58.991 ********** 2026-04-06 05:08:35.918923 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:08:35.918934 | orchestrator | 
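The `Find a running mon container` / `Set_fact running_mon - container` pair above runs `docker ps -q --filter name=ceph-mon-<host>` against each mon host and keeps a host whose query returned a container ID. A sketch of that selection, using the per-item results exactly as they appear in the log (the "first match wins" rule is an assumption about how the fact is derived, not taken from the playbook source):

```python
# Hedged reconstruction of the running-mon selection from the loop
# results logged above; stdout is the container ID from
# `docker ps -q --filter name=ceph-mon-<host>`.
results = [
    {"item": "testbed-node-0", "rc": 0, "stdout": "7ab3f7ebb0fe"},
    {"item": "testbed-node-1", "rc": 0, "stdout": "46d5ea15fe96"},
    {"item": "testbed-node-2", "rc": 0, "stdout": "a87eea657fd7"},
]

# A host counts as running a mon when the filter query succeeded and
# returned a non-empty container ID; take the first such host.
running_mon = next(
    (r["item"] for r in results if r["rc"] == 0 and r["stdout"]),
    None,
)
print(running_mon)
# testbed-node-0
```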
2026-04-06 05:08:35.918945 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-06 05:08:35.918955 | orchestrator | Monday 06 April 2026 05:08:29 +0000 (0:00:00.248) 0:00:59.239 ********** 2026-04-06 05:08:35.918966 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:08:35.918977 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:08:35.918987 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:08:35.919027 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:08:35.919040 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:08:35.919051 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:08:35.919062 | orchestrator | ok: [testbed-manager] 2026-04-06 05:08:35.919073 | orchestrator | 2026-04-06 05:08:35.919084 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-06 05:08:35.919095 | orchestrator | Monday 06 April 2026 05:08:30 +0000 (0:00:01.011) 0:01:00.250 ********** 2026-04-06 05:08:35.919106 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:08:35.919117 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:08:35.919128 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:08:35.919139 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:08:35.919149 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:08:35.919160 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:08:35.919171 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-06 05:08:35.919182 | orchestrator | 2026-04-06 05:08:35.919193 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-06 05:08:35.919204 | orchestrator | Monday 06 April 2026 05:08:33 +0000 (0:00:03.333) 0:01:03.584 ********** 2026-04-06 05:08:35.919224 | orchestrator | ok: [testbed-node-0] 2026-04-06 
05:08:35.919235 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:08:35.919246 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:08:35.919257 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:08:35.919268 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:08:35.919279 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:08:35.919290 | orchestrator | ok: [testbed-manager] 2026-04-06 05:08:35.919301 | orchestrator | 2026-04-06 05:08:35.919312 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-06 05:08:35.919324 | orchestrator | Monday 06 April 2026 05:08:34 +0000 (0:00:01.026) 0:01:04.610 ********** 2026-04-06 05:08:35.919335 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:08:35.919346 | orchestrator | 2026-04-06 05:08:35.919357 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-06 05:08:35.919368 | orchestrator | Monday 06 April 2026 05:08:35 +0000 (0:00:00.145) 0:01:04.756 ********** 2026-04-06 05:08:35.919379 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:08:35.919390 | orchestrator | 2026-04-06 05:08:35.919401 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-06 05:08:35.919412 | orchestrator | Monday 06 April 2026 05:08:35 +0000 (0:00:00.224) 0:01:04.981 ********** 2026-04-06 05:08:35.919422 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:08:35.919433 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:08:35.919444 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:08:35.919455 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:08:35.919466 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:08:35.919487 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:08:42.044090 | orchestrator | skipping: [testbed-manager] 2026-04-06 05:08:42.044202 | orchestrator | 2026-04-06 05:08:42.044219 | orchestrator | TASK [ceph-facts : Resolve device 
link(s)] ************************************* 2026-04-06 05:08:42.044233 | orchestrator | Monday 06 April 2026 05:08:36 +0000 (0:00:01.105) 0:01:06.087 ********** 2026-04-06 05:08:42.044244 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:08:42.044255 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:08:42.044266 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:08:42.044277 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:08:42.044288 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:08:42.044299 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:08:42.044310 | orchestrator | skipping: [testbed-manager] 2026-04-06 05:08:42.044322 | orchestrator | 2026-04-06 05:08:42.044349 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-06 05:08:42.044360 | orchestrator | Monday 06 April 2026 05:08:37 +0000 (0:00:01.017) 0:01:07.104 ********** 2026-04-06 05:08:42.044371 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:08:42.044382 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:08:42.044393 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:08:42.044404 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:08:42.044415 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:08:42.044426 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:08:42.044437 | orchestrator | skipping: [testbed-manager] 2026-04-06 05:08:42.044447 | orchestrator | 2026-04-06 05:08:42.044459 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-06 05:08:42.044469 | orchestrator | Monday 06 April 2026 05:08:38 +0000 (0:00:01.081) 0:01:08.185 ********** 2026-04-06 05:08:42.044480 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:08:42.044491 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:08:42.044502 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:08:42.044513 | orchestrator | skipping: 
[testbed-node-3] 2026-04-06 05:08:42.044524 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:08:42.044534 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:08:42.044545 | orchestrator | skipping: [testbed-manager] 2026-04-06 05:08:42.044556 | orchestrator | 2026-04-06 05:08:42.044567 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-06 05:08:42.044600 | orchestrator | Monday 06 April 2026 05:08:39 +0000 (0:00:00.744) 0:01:08.930 ********** 2026-04-06 05:08:42.044611 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:08:42.044622 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:08:42.044633 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:08:42.044644 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:08:42.044655 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:08:42.044665 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:08:42.044676 | orchestrator | skipping: [testbed-manager] 2026-04-06 05:08:42.044687 | orchestrator | 2026-04-06 05:08:42.044698 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-06 05:08:42.044709 | orchestrator | Monday 06 April 2026 05:08:40 +0000 (0:00:00.958) 0:01:09.888 ********** 2026-04-06 05:08:42.044720 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:08:42.044730 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:08:42.044741 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:08:42.044752 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:08:42.044762 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:08:42.044773 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:08:42.044784 | orchestrator | skipping: [testbed-manager] 2026-04-06 05:08:42.044795 | orchestrator | 2026-04-06 05:08:42.044806 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-06 05:08:42.044817 | 
orchestrator | Monday 06 April 2026 05:08:40 +0000 (0:00:00.737) 0:01:10.626 ********** 2026-04-06 05:08:42.044828 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:08:42.044838 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:08:42.044849 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:08:42.044860 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:08:42.044871 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:08:42.044881 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:08:42.044892 | orchestrator | skipping: [testbed-manager] 2026-04-06 05:08:42.044903 | orchestrator | 2026-04-06 05:08:42.044914 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-06 05:08:42.044925 | orchestrator | Monday 06 April 2026 05:08:41 +0000 (0:00:00.986) 0:01:11.613 ********** 2026-04-06 05:08:42.044938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.044954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.044966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': 
None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.045021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-06 05:08:42.045046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.045063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.045083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.045106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '23f8d4f9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part16', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part14', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part15', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part1', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part1'], 'uuids': 
['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-06 05:08:42.045143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.300737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.300834 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:08:42.300851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.300864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.300876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.300889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-48-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-06 05:08:42.300904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.300915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': 
[], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.300927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.300970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a48c2299', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': 
{'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-06 05:08:42.301064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.301079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.301091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': 
[], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.301102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.301114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.301149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-06 05:08:42.503115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 
'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.503212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.503227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.503243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a86fd0c9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part16', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': 
'1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part14', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part15', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part1', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-06 05:08:42.503283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.503328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.503342 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:08:42.503356 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.503369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--33ff4195--b9ae--565c--9501--f62265c8cf2c-osd--block--33ff4195--b9ae--565c--9501--f62265c8cf2c', 'dm-uuid-LVM-bPoYmFvg2GavrOdhBiQRDEx8f4M6ftpRd0WF3SgLoZI9250ovpvj600rDtqy23dS'], 'uuids': ['568ee26d-bc52-45e1-a610-bd1b65a33bb1'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '8498d812', 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['d0WF3S-gLoZ-I925-0ovp-vj60-0rDt-qy23dS']}})  2026-04-06 05:08:42.503382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103', 'scsi-SQEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '71f71275', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-06 05:08:42.503395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-KIe40k-k1Qf-BSLn-gKBM-IKSP-hovG-JLrIYd', 'scsi-0QEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527', 'scsi-SQEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5872ea60', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--44d7a625--0d29--5597--9a0c--b91ce06f2e33-osd--block--44d7a625--0d29--5597--9a0c--b91ce06f2e33']}})  2026-04-06 05:08:42.503407 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:08:42.503418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.503438 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.503462 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-06 05:08:42.608896 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.608989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-leWHVn-oic4-cgBg-jJKw-f9UM-EMV2-wXFYs3', 'dm-uuid-CRYPT-LUKS2-9b11f78520334917a26820c7a917e496-leWHVn-oic4-cgBg-jJKw-f9UM-EMV2-wXFYs3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-06 05:08:42.609055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.609068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--44d7a625--0d29--5597--9a0c--b91ce06f2e33-osd--block--44d7a625--0d29--5597--9a0c--b91ce06f2e33', 'dm-uuid-LVM-9nFw926dfpKXupvgijedzJHToRNmcQ5JleWHVnoic4cgBgjJKwf9UMEMV2wXFYs3'], 'uuids': ['9b11f785-2033-4917-a268-20c7a917e496'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5872ea60', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': 
['leWHVn-oic4-cgBg-jJKw-f9UM-EMV2-wXFYs3']}})  2026-04-06 05:08:42.609080 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-oc9r6Q-FBfB-APQ9-Ef3d-Gduy-n2RE-MAdmSJ', 'scsi-0QEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c', 'scsi-SQEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8498d812', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--33ff4195--b9ae--565c--9501--f62265c8cf2c-osd--block--33ff4195--b9ae--565c--9501--f62265c8cf2c']}})  2026-04-06 05:08:42.609121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.609168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9d494db8', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part16', 
'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part15', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-06 05:08:42.609183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.609194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.609205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8c307d7c--3927--5061--a8a8--155bb148bb1a-osd--block--8c307d7c--3927--5061--a8a8--155bb148bb1a', 'dm-uuid-LVM-5SBcK6LYcqc3U9JW4A7AEqQb9XhQaJZNALmkUrHWUZpUhCY8hyCk4SVv02FoAkUp'], 'uuids': ['83378823-14d2-4928-9007-67488abc99a7'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '48ce9836', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ALmkUr-HWUZ-pUhC-Y8hy-Ck4S-Vv02-FoAkUp']}})  2026-04-06 05:08:42.609226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.609242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de', 'scsi-SQEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4a868051', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-06 05:08:42.609260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-d0WF3S-gLoZ-I925-0ovp-vj60-0rDt-qy23dS', 'dm-uuid-CRYPT-LUKS2-568ee26dbc5245e1a610bd1b65a33bb1-d0WF3S-gLoZ-I925-0ovp-vj60-0rDt-qy23dS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-06 05:08:42.714437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-9JZghf-Tj4T-hJH3-TdHl-k5PF-Zmcx-ynVATr', 'scsi-0QEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554', 'scsi-SQEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f369a6c0', 'removable': '0', 'support_discard': '4096', 
'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3-osd--block--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3']}})  2026-04-06 05:08:42.714542 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.714561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.714574 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-06 05:08:42.714613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.714625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-7tdY8L-LV0U-b3l0-Z8I0-Y4ch-NDJ3-j6J7vO', 'dm-uuid-CRYPT-LUKS2-dd6ed06a0d554d6181a429bf5c5222d7-7tdY8L-LV0U-b3l0-Z8I0-Y4ch-NDJ3-j6J7vO'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-06 05:08:42.714638 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:08:42.714666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.714727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3-osd--block--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3', 'dm-uuid-LVM-UTQM7S53ibMHEifiI2Bv5Thw7s0lsM0j7tdY8LLV0Ub3l0Z8I0Y4chNDJ3j6J7vO'], 'uuids': ['dd6ed06a-0d55-4d61-81a4-29bf5c5222d7'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f369a6c0', 'removable': '0', 'support_discard': '4096', 'partitions': 
{}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['7tdY8L-LV0U-b3l0-Z8I0-Y4ch-NDJ3-j6J7vO']}})  2026-04-06 05:08:42.714743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-bmjYoX-DOC2-0AWC-rYYB-WEnJ-01uQ-WQd2JR', 'scsi-0QEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867', 'scsi-SQEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48ce9836', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8c307d7c--3927--5061--a8a8--155bb148bb1a-osd--block--8c307d7c--3927--5061--a8a8--155bb148bb1a']}})  2026-04-06 05:08:42.714755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.714786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '40f67feb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part16', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part14', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part15', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part1', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-06 05:08:42.714811 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.879292 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.879375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ALmkUr-HWUZ-pUhC-Y8hy-Ck4S-Vv02-FoAkUp', 'dm-uuid-CRYPT-LUKS2-8337882314d24928900767488abc99a7-ALmkUr-HWUZ-pUhC-Y8hy-Ck4S-Vv02-FoAkUp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-06 05:08:42.879387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}})  2026-04-06 05:08:42.879413 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d79f264--f564--5244--b3d4--1e30cd615742-osd--block--4d79f264--f564--5244--b3d4--1e30cd615742', 'dm-uuid-LVM-Z6Gfl68NWHSIaTDLndMKbJ9g2vXxLKS7H7IVDVpTPXM3dDz207hlZrQACS13BMNP'], 'uuids': ['22ded8c8-9142-404c-a572-856e0a8f4fba'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c3f554c9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['H7IVDV-pTPX-M3dD-z207-hlZr-QACS-13BMNP']}})  2026-04-06 05:08:42.879422 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c', 'scsi-SQEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd180ec14', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-06 05:08:42.879440 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-lROe02-FRbV-W78v-Dfl5-E5Bd-fAVM-rPPzrC', 'scsi-0QEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d', 'scsi-SQEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '43e26771', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--fcd584d6--c8ff--5eaf--81cc--26105cfb5447-osd--block--fcd584d6--c8ff--5eaf--81cc--26105cfb5447']}})  2026-04-06 05:08:42.879447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.879468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.879475 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-40-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-06 05:08:42.879482 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.879494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt', 'dm-uuid-CRYPT-LUKS2-0cb92a9095ac4932ba9885def0a3f871-WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-06 05:08:42.879500 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:42.879507 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fcd584d6--c8ff--5eaf--81cc--26105cfb5447-osd--block--fcd584d6--c8ff--5eaf--81cc--26105cfb5447', 'dm-uuid-LVM-DDg0C3XoaiYrOzMcB0kfPfqzHg8E5JhRWG4AoOycNeM5Q2WICfjMBHF0YX2mqeJt'], 'uuids': ['0cb92a90-95ac-4932-ba98-85def0a3f871'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '43e26771', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt']}})  2026-04-06 05:08:42.879517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5lLdRw-7tLp-t2wE-raTC-2xO3-NEEr-mCIRos', 'scsi-0QEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0', 'scsi-SQEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c3f554c9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4d79f264--f564--5244--b3d4--1e30cd615742-osd--block--4d79f264--f564--5244--b3d4--1e30cd615742']}})  2026-04-06 05:08:42.879529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:43.037763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd99642af', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-06 05:08:43.037884 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:43.037901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:43.037914 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:08:43.037942 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-H7IVDV-pTPX-M3dD-z207-hlZr-QACS-13BMNP', 
'dm-uuid-CRYPT-LUKS2-22ded8c89142404ca572856e0a8f4fba-H7IVDV-pTPX-M3dD-z207-hlZr-QACS-13BMNP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-06 05:08:43.037956 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:08:43.037968 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:43.038105 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:43.038122 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:43.038143 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': 
{'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-40-11-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-06 05:08:43.038155 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:43.038167 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:43.038178 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:43.038292 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_14d1d2ef-4ebf-4498-8f44-8e84ff37ee7c', 'scsi-SQEMU_QEMU_HARDDISK_14d1d2ef-4ebf-4498-8f44-8e84ff37ee7c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '14d1d2ef', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14d1d2ef-4ebf-4498-8f44-8e84ff37ee7c-part16', 'scsi-SQEMU_QEMU_HARDDISK_14d1d2ef-4ebf-4498-8f44-8e84ff37ee7c-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14d1d2ef-4ebf-4498-8f44-8e84ff37ee7c-part14', 'scsi-SQEMU_QEMU_HARDDISK_14d1d2ef-4ebf-4498-8f44-8e84ff37ee7c-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14d1d2ef-4ebf-4498-8f44-8e84ff37ee7c-part15', 'scsi-SQEMU_QEMU_HARDDISK_14d1d2ef-4ebf-4498-8f44-8e84ff37ee7c-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14d1d2ef-4ebf-4498-8f44-8e84ff37ee7c-part1', 'scsi-SQEMU_QEMU_HARDDISK_14d1d2ef-4ebf-4498-8f44-8e84ff37ee7c-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-06 05:08:43.284569 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:43.284697 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:08:43.284728 | orchestrator | skipping: [testbed-manager] 2026-04-06 05:08:43.284750 | orchestrator | 2026-04-06 05:08:43.284770 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-06 05:08:43.284789 | orchestrator | Monday 06 April 2026 05:08:43 +0000 (0:00:01.259) 0:01:12.873 ********** 2026-04-06 05:08:43.284812 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:43.284834 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:43.284873 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:43.284896 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:43.284973 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:43.284987 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:43.285052 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:43.285079 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '23f8d4f9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part16', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part14', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part15', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part1', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:43.285117 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:43.716501 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:43.716605 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:08:43.716623 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:43.716636 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:43.716663 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:43.716676 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-48-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:43.716710 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:43.716740 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:43.716752 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:43.716782 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a48c2299', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part15', 
'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:43.716807 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:43.716828 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:44.082532 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:08:44.082635 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:44.082653 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:44.082683 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:44.082697 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:44.082730 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:44.082742 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:08:44.082772 | orchestrator | skipping: [testbed-node-2] => (item=loop7; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 05:08:44.082794 | orchestrator | skipping: [testbed-node-2] => (item=sda; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 05:08:44.082817 | orchestrator | skipping: [testbed-node-2] => (item=loop5; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 05:08:44.082829 | orchestrator | skipping: [testbed-node-2] => (item=loop3; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-04-06 05:08:44.082840 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:08:44.082861 | orchestrator | skipping: [testbed-node-3] => (item=loop1; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.234948 | orchestrator | skipping: [testbed-node-3] => (item=dm-1; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.235097 | orchestrator | skipping: [testbed-node-3] => (item=sdd; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.235132 | orchestrator | skipping: [testbed-node-3] => (item=sdb; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.235144 | orchestrator | skipping: [testbed-node-3] => (item=loop6; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.235152 | orchestrator | skipping: [testbed-node-3] => (item=loop4; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.235175 | orchestrator | skipping: [testbed-node-3] => (item=sr0; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.235188 | orchestrator | skipping: [testbed-node-3] => (item=loop2; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.235200 | orchestrator | skipping: [testbed-node-3] => (item=dm-2; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.235208 | orchestrator | skipping: [testbed-node-3] => (item=loop0; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.235216 | orchestrator | skipping: [testbed-node-3] => (item=dm-0; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.235228 | orchestrator | skipping: [testbed-node-4] => (item=loop1; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.311209 | orchestrator | skipping: [testbed-node-3] => (item=sdc; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.311338 | orchestrator | skipping: [testbed-node-4] => (item=dm-1; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.311351 | orchestrator | skipping: [testbed-node-3] => (item=loop7; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.311376 | orchestrator | skipping: [testbed-node-3] => (item=sda; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.311390 | orchestrator | skipping: [testbed-node-4] => (item=sdd; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.311402 | orchestrator | skipping: [testbed-node-3] => (item=loop5; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.311408 | orchestrator | skipping: [testbed-node-4] => (item=sdb; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.311415 | orchestrator | skipping: [testbed-node-3] => (item=loop3; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.311427 | orchestrator | skipping: [testbed-node-4] => (item=loop6; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.389132 | orchestrator | skipping: [testbed-node-3] => (item=dm-3; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.389281 | orchestrator | skipping: [testbed-node-4] => (item=loop4; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.389299 | orchestrator | skipping: [testbed-node-4] => (item=sr0; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.389311 | orchestrator | skipping: [testbed-node-4] => (item=loop2; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.389323 | orchestrator | skipping: [testbed-node-4] => (item=dm-2; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.389354 | orchestrator | skipping: [testbed-node-4] => (item=loop0; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.389373 | orchestrator | skipping: [testbed-node-4] => (item=dm-0; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.389395 | orchestrator | skipping: [testbed-node-4] => (item=sdc; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.389412 | orchestrator | skipping: [testbed-node-4] => (item=loop7; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.389435 | orchestrator | skipping: [testbed-node-4] => (item=sda; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.525262 | orchestrator | skipping: [testbed-node-4] => (item=loop5; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.525361 | orchestrator | skipping: [testbed-node-5] => (item=loop1; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.525376 | orchestrator | skipping: [testbed-node-4] => (item=loop3; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.525389 | orchestrator | skipping: [testbed-node-5] => (item=dm-1; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.525404 | orchestrator | skipping: [testbed-node-4] => (item=dm-3; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.525487 | orchestrator | skipping: [testbed-node-5] => (item=sdd; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.525504 | orchestrator | skipping: [testbed-node-5] => (item=sdb; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.525520 | orchestrator | skipping: [testbed-node-5] => (item=loop6; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.525532 | orchestrator | skipping: [testbed-node-5] => (item=loop4; false_condition: 'osd_auto_discovery | default(False) | bool')
2026-04-06 05:08:44.525543 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:44.525563 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:44.525588 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt', 'dm-uuid-CRYPT-LUKS2-0cb92a9095ac4932ba9885def0a3f871-WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:44.648686 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:44.648797 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fcd584d6--c8ff--5eaf--81cc--26105cfb5447-osd--block--fcd584d6--c8ff--5eaf--81cc--26105cfb5447', 'dm-uuid-LVM-DDg0C3XoaiYrOzMcB0kfPfqzHg8E5JhRWG4AoOycNeM5Q2WICfjMBHF0YX2mqeJt'], 'uuids': ['0cb92a90-95ac-4932-ba98-85def0a3f871'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '43e26771', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:44.648824 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:08:44.648839 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5lLdRw-7tLp-t2wE-raTC-2xO3-NEEr-mCIRos', 'scsi-0QEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0', 'scsi-SQEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c3f554c9', 'removable': '0', 
'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--4d79f264--f564--5244--b3d4--1e30cd615742-osd--block--4d79f264--f564--5244--b3d4--1e30cd615742']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:44.648876 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:08:44.648888 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:44.648932 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd99642af', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 
'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:44.648946 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:44.648957 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:44.648975 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'}) 
 2026-04-06 05:08:44.648991 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-H7IVDV-pTPX-M3dD-z207-hlZr-QACS-13BMNP', 'dm-uuid-CRYPT-LUKS2-22ded8c89142404ca572856e0a8f4fba-H7IVDV-pTPX-M3dD-z207-hlZr-QACS-13BMNP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:44.649056 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:48.212634 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:08:48.212747 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 
'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:48.212768 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-40-11-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:48.212781 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:48.212818 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:48.212845 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:48.212882 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14d1d2ef-4ebf-4498-8f44-8e84ff37ee7c', 'scsi-SQEMU_QEMU_HARDDISK_14d1d2ef-4ebf-4498-8f44-8e84ff37ee7c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '14d1d2ef', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14d1d2ef-4ebf-4498-8f44-8e84ff37ee7c-part16', 'scsi-SQEMU_QEMU_HARDDISK_14d1d2ef-4ebf-4498-8f44-8e84ff37ee7c-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 
'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14d1d2ef-4ebf-4498-8f44-8e84ff37ee7c-part14', 'scsi-SQEMU_QEMU_HARDDISK_14d1d2ef-4ebf-4498-8f44-8e84ff37ee7c-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14d1d2ef-4ebf-4498-8f44-8e84ff37ee7c-part15', 'scsi-SQEMU_QEMU_HARDDISK_14d1d2ef-4ebf-4498-8f44-8e84ff37ee7c-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_14d1d2ef-4ebf-4498-8f44-8e84ff37ee7c-part1', 'scsi-SQEMU_QEMU_HARDDISK_14d1d2ef-4ebf-4498-8f44-8e84ff37ee7c-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:48.212897 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:48.212918 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:08:48.212930 | orchestrator | skipping: [testbed-manager] 2026-04-06 05:08:48.212942 | orchestrator | 2026-04-06 05:08:48.212954 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-06 05:08:48.212966 | orchestrator | Monday 06 April 2026 05:08:44 +0000 (0:00:01.648) 0:01:14.521 ********** 2026-04-06 05:08:48.212977 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:08:48.212989 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:08:48.213063 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:08:48.213075 | orchestrator | ok: [testbed-node-3] 2026-04-06 
05:08:48.213085 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:08:48.213096 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:08:48.213113 | orchestrator | ok: [testbed-manager] 2026-04-06 05:08:48.213124 | orchestrator | 2026-04-06 05:08:48.213135 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-06 05:08:48.213149 | orchestrator | Monday 06 April 2026 05:08:46 +0000 (0:00:01.319) 0:01:15.841 ********** 2026-04-06 05:08:48.213162 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:08:48.213175 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:08:48.213189 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:08:48.213201 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:08:48.213213 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:08:48.213226 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:08:48.213239 | orchestrator | ok: [testbed-manager] 2026-04-06 05:08:48.213251 | orchestrator | 2026-04-06 05:08:48.213264 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 05:08:48.213277 | orchestrator | Monday 06 April 2026 05:08:46 +0000 (0:00:00.787) 0:01:16.628 ********** 2026-04-06 05:08:48.213290 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:08:48.213302 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:08:48.213315 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:08:48.213328 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:08:48.213341 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:08:48.213353 | orchestrator | skipping: [testbed-manager] 2026-04-06 05:08:48.213366 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:08:48.213378 | orchestrator | 2026-04-06 05:08:48.213391 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 05:08:48.213412 | orchestrator | Monday 06 April 2026 05:08:48 +0000 (0:00:01.296) 0:01:17.925 ********** 2026-04-06 05:09:00.393451 | 
orchestrator | skipping: [testbed-node-0] 2026-04-06 05:09:00.393572 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:09:00.393594 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:09:00.393611 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:09:00.393627 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:09:00.393643 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:09:00.393659 | orchestrator | skipping: [testbed-manager] 2026-04-06 05:09:00.393709 | orchestrator | 2026-04-06 05:09:00.393726 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 05:09:00.393743 | orchestrator | Monday 06 April 2026 05:08:48 +0000 (0:00:00.754) 0:01:18.679 ********** 2026-04-06 05:09:00.393760 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:09:00.393777 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:09:00.393852 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:09:00.393871 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:09:00.393886 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:09:00.393902 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:09:00.393917 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)] 2026-04-06 05:09:00.393933 | orchestrator | 2026-04-06 05:09:00.393949 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 05:09:00.393966 | orchestrator | Monday 06 April 2026 05:08:50 +0000 (0:00:01.528) 0:01:20.208 ********** 2026-04-06 05:09:00.393984 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:09:00.394094 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:09:00.394114 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:09:00.394132 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:09:00.394151 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:09:00.394168 | orchestrator | skipping: [testbed-node-5] 
2026-04-06 05:09:00.394185 | orchestrator | skipping: [testbed-manager] 2026-04-06 05:09:00.394201 | orchestrator | 2026-04-06 05:09:00.394219 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-06 05:09:00.394237 | orchestrator | Monday 06 April 2026 05:08:51 +0000 (0:00:00.780) 0:01:20.989 ********** 2026-04-06 05:09:00.394255 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-06 05:09:00.394274 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-06 05:09:00.394290 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-06 05:09:00.394306 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-04-06 05:09:00.394321 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-06 05:09:00.394338 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-06 05:09:00.394354 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-04-06 05:09:00.394371 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-06 05:09:00.394388 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-06 05:09:00.394403 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-06 05:09:00.394420 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-06 05:09:00.394436 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-06 05:09:00.394452 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-06 05:09:00.394467 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-06 05:09:00.394483 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-06 05:09:00.394498 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-06 05:09:00.394513 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-06 05:09:00.394529 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-06 
05:09:00.394545 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-06 05:09:00.394560 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-06 05:09:00.394577 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-06 05:09:00.394592 | orchestrator | 2026-04-06 05:09:00.394607 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-06 05:09:00.394623 | orchestrator | Monday 06 April 2026 05:08:53 +0000 (0:00:01.907) 0:01:22.897 ********** 2026-04-06 05:09:00.394638 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-06 05:09:00.394654 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-06 05:09:00.394669 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-06 05:09:00.394686 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:09:00.394719 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-06 05:09:00.394729 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-06 05:09:00.394738 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-06 05:09:00.394761 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:09:00.394770 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-06 05:09:00.394779 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-06 05:09:00.394787 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-06 05:09:00.394796 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:09:00.394805 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-06 05:09:00.394814 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-06 05:09:00.394822 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-06 05:09:00.394831 | orchestrator | skipping: [testbed-node-3] 
2026-04-06 05:09:00.394840 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-06 05:09:00.394848 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-06 05:09:00.394857 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-06 05:09:00.394866 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:00.394875 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-06 05:09:00.394884 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-06 05:09:00.394893 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-06 05:09:00.394902 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:00.394933 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-06 05:09:00.394942 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-06 05:09:00.394951 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-06 05:09:00.394960 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:00.394969 | orchestrator |
2026-04-06 05:09:00.394977 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-06 05:09:00.394986 | orchestrator | Monday 06 April 2026 05:08:54 +0000 (0:00:00.893) 0:01:23.791 **********
2026-04-06 05:09:00.395024 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:00.395038 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:00.395047 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:00.395055 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:00.395065 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 05:09:00.395074 | orchestrator |
2026-04-06 05:09:00.395083 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-06 05:09:00.395094 | orchestrator | Monday 06 April 2026 05:08:55 +0000 (0:00:01.301) 0:01:25.092 **********
2026-04-06 05:09:00.395103 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:00.395112 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:00.395120 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:00.395129 | orchestrator |
2026-04-06 05:09:00.395138 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-06 05:09:00.395147 | orchestrator | Monday 06 April 2026 05:08:55 +0000 (0:00:00.349) 0:01:25.441 **********
2026-04-06 05:09:00.395156 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:00.395164 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:00.395173 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:00.395182 | orchestrator |
2026-04-06 05:09:00.395191 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-06 05:09:00.395200 | orchestrator | Monday 06 April 2026 05:08:56 +0000 (0:00:00.622) 0:01:26.064 **********
2026-04-06 05:09:00.395208 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:00.395216 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:00.395231 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:00.395239 | orchestrator |
2026-04-06 05:09:00.395247 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-06 05:09:00.395255 | orchestrator | Monday 06 April 2026 05:08:56 +0000 (0:00:00.345) 0:01:26.409 **********
2026-04-06 05:09:00.395263 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:09:00.395271 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:09:00.395279 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:09:00.395287 | orchestrator |
2026-04-06 05:09:00.395295 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-06 05:09:00.395303 | orchestrator | Monday 06 April 2026 05:08:57 +0000 (0:00:00.424) 0:01:26.833 **********
2026-04-06 05:09:00.395311 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-06 05:09:00.395319 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-06 05:09:00.395327 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-06 05:09:00.395335 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:00.395343 | orchestrator |
2026-04-06 05:09:00.395351 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-06 05:09:00.395359 | orchestrator | Monday 06 April 2026 05:08:57 +0000 (0:00:00.395) 0:01:27.229 **********
2026-04-06 05:09:00.395366 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-06 05:09:00.395389 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-06 05:09:00.395397 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-06 05:09:00.395413 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:00.395421 | orchestrator |
2026-04-06 05:09:00.395429 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-06 05:09:00.395437 | orchestrator | Monday 06 April 2026 05:08:57 +0000 (0:00:00.382) 0:01:27.612 **********
2026-04-06 05:09:00.395445 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-06 05:09:00.395453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-06 05:09:00.395461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-06 05:09:00.395468 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:00.395476 | orchestrator |
2026-04-06 05:09:00.395484 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-06 05:09:00.395492 | orchestrator | Monday 06 April 2026 05:08:58 +0000 (0:00:00.709) 0:01:28.322 **********
2026-04-06 05:09:00.395505 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:09:00.395513 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:09:00.395521 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:09:00.395529 | orchestrator |
2026-04-06 05:09:00.395537 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-06 05:09:00.395545 | orchestrator | Monday 06 April 2026 05:08:59 +0000 (0:00:00.679) 0:01:29.001 **********
2026-04-06 05:09:00.395553 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-06 05:09:00.395560 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-06 05:09:00.395568 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-06 05:09:00.395576 | orchestrator |
2026-04-06 05:09:00.395584 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-06 05:09:00.395592 | orchestrator | Monday 06 April 2026 05:08:59 +0000 (0:00:00.553) 0:01:29.554 **********
2026-04-06 05:09:00.395600 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 05:09:00.395608 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 05:09:00.395616 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 05:09:00.395624 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-06 05:09:00.395638 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-06 05:09:27.671822 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-06 05:09:27.671960 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-06 05:09:27.671978 | orchestrator |
2026-04-06 05:09:27.671991 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-06 05:09:27.672051 | orchestrator | Monday 06 April 2026 05:09:00 +0000 (0:00:00.847) 0:01:30.402 **********
2026-04-06 05:09:27.672063 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 05:09:27.672074 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 05:09:27.672085 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 05:09:27.672096 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-06 05:09:27.672107 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-06 05:09:27.672118 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-06 05:09:27.672128 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-06 05:09:27.672139 | orchestrator |
2026-04-06 05:09:27.672150 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] **************************
2026-04-06 05:09:27.672161 | orchestrator | Monday 06 April 2026 05:09:02 +0000 (0:00:02.228) 0:01:32.631 **********
2026-04-06 05:09:27.672171 | orchestrator | changed: [testbed-node-3]
2026-04-06 05:09:27.672183 | orchestrator | changed: [testbed-node-4]
2026-04-06 05:09:27.672194 | orchestrator | changed: [testbed-node-5]
2026-04-06 05:09:27.672205 | orchestrator | changed: [testbed-manager]
2026-04-06 05:09:27.672215 | orchestrator | changed: [testbed-node-2]
2026-04-06 05:09:27.672226 | orchestrator | changed: [testbed-node-1]
2026-04-06 05:09:27.672237 | orchestrator | changed: [testbed-node-0]
2026-04-06 05:09:27.672247 | orchestrator |
2026-04-06 05:09:27.672258 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] ***********************
2026-04-06 05:09:27.672269 | orchestrator | Monday 06 April 2026 05:09:10 +0000 (0:00:07.505) 0:01:40.136 **********
2026-04-06 05:09:27.672280 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:27.672290 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:27.672301 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:27.672312 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:27.672322 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:27.672333 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:27.672344 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:27.672358 | orchestrator |
2026-04-06 05:09:27.672372 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ********************************
2026-04-06 05:09:27.672385 | orchestrator | Monday 06 April 2026 05:09:11 +0000 (0:00:00.956) 0:01:41.093 **********
2026-04-06 05:09:27.672398 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:27.672412 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:27.672424 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:27.672438 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:27.672450 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:27.672463 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:27.672475 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:27.672488 | orchestrator |
2026-04-06 05:09:27.672501 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ********************************
2026-04-06 05:09:27.672514 | orchestrator | Monday 06 April 2026 05:09:12 +0000 (0:00:00.752) 0:01:41.845 **********
2026-04-06 05:09:27.672527 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:27.672540 | orchestrator | changed: [testbed-node-1]
2026-04-06 05:09:27.672554 | orchestrator | changed: [testbed-node-2]
2026-04-06 05:09:27.672566 | orchestrator | changed: [testbed-node-0]
2026-04-06 05:09:27.672579 | orchestrator | changed: [testbed-node-3]
2026-04-06 05:09:27.672592 | orchestrator | changed: [testbed-node-4]
2026-04-06 05:09:27.672605 | orchestrator | changed: [testbed-node-5]
2026-04-06 05:09:27.672627 | orchestrator |
2026-04-06 05:09:27.672641 | orchestrator | TASK [ceph-validate : Include check_system.yml] ********************************
2026-04-06 05:09:27.672655 | orchestrator | Monday 06 April 2026 05:09:14 +0000 (0:00:02.335) 0:01:44.181 **********
2026-04-06 05:09:27.672669 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-04-06 05:09:27.672684 | orchestrator |
2026-04-06 05:09:27.672698 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-04-06 05:09:27.672724 | orchestrator | Monday 06 April 2026 05:09:16 +0000 (0:00:01.991) 0:01:46.172 **********
2026-04-06 05:09:27.672736 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:27.672746 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:27.672757 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:27.672768 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:27.672779 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:27.672790 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:27.672801 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:27.672811 | orchestrator |
2026-04-06 05:09:27.672822 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ******************************
2026-04-06 05:09:27.672833 | orchestrator | Monday 06 April 2026 05:09:17 +0000 (0:00:01.015) 0:01:47.187 **********
2026-04-06 05:09:27.672844 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:27.672899 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:27.672910 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:27.672921 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:27.672932 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:27.672942 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:27.672953 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:27.672964 | orchestrator |
2026-04-06 05:09:27.672975 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************
2026-04-06 05:09:27.672986 | orchestrator | Monday 06 April 2026 05:09:18 +0000 (0:00:01.041) 0:01:48.229 **********
2026-04-06 05:09:27.673014 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:27.673044 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:27.673056 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:27.673067 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:27.673077 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:27.673088 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:27.673099 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:27.673110 | orchestrator |
2026-04-06 05:09:27.673121 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************
2026-04-06 05:09:27.673131 | orchestrator | Monday 06 April 2026 05:09:19 +0000 (0:00:00.839) 0:01:49.068 **********
2026-04-06 05:09:27.673142 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:27.673153 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:27.673163 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:27.673174 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:27.673185 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:27.673195 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:27.673206 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:27.673217 | orchestrator |
2026-04-06 05:09:27.673227 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] **********************
2026-04-06 05:09:27.673238 | orchestrator | Monday 06 April 2026 05:09:20 +0000 (0:00:01.006) 0:01:50.075 **********
2026-04-06 05:09:27.673249 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:27.673260 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:27.673274 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:27.673292 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:27.673309 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:27.673326 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:27.673344 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:27.673361 | orchestrator |
2026-04-06 05:09:27.673391 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] ***
2026-04-06 05:09:27.673410 | orchestrator | Monday 06 April 2026 05:09:21 +0000 (0:00:00.785) 0:01:50.860 **********
2026-04-06 05:09:27.673429 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:27.673448 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:27.673473 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:27.673484 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:27.673495 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:27.673506 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:27.673518 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:27.673529 | orchestrator |
2026-04-06 05:09:27.673541 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] ***
2026-04-06 05:09:27.673552 | orchestrator | Monday 06 April 2026 05:09:22 +0000 (0:00:01.035) 0:01:51.896 **********
2026-04-06 05:09:27.673563 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:27.673574 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:27.673585 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:27.673595 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:27.673606 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:27.673617 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:27.673628 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:27.673638 | orchestrator |
2026-04-06 05:09:27.673649 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] **************************
2026-04-06 05:09:27.673660 | orchestrator | Monday 06 April 2026 05:09:22 +0000 (0:00:00.757) 0:01:52.654 **********
2026-04-06 05:09:27.673671 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:27.673682 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:27.673693 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:27.673704 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:27.673714 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:27.673725 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:27.673736 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:27.673747 | orchestrator |
2026-04-06 05:09:27.673758 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] ***
2026-04-06 05:09:27.673769 | orchestrator | Monday 06 April 2026 05:09:23 +0000 (0:00:01.053) 0:01:53.708 **********
2026-04-06 05:09:27.673786 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:27.673805 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:27.673822 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:27.673839 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:27.673857 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:27.673877 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:27.673896 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:27.673912 | orchestrator |
2026-04-06 05:09:27.673923 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ********************************
2026-04-06 05:09:27.673934 | orchestrator | Monday 06 April 2026 05:09:24 +0000 (0:00:00.845) 0:01:54.554 **********
2026-04-06 05:09:27.673944 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:27.673955 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:27.673966 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:27.673976 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:27.674064 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:27.674080 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:27.674091 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:27.674102 | orchestrator |
2026-04-06 05:09:27.674113 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ******************
2026-04-06 05:09:27.674124 | orchestrator | Monday 06 April 2026 05:09:25 +0000 (0:00:01.026) 0:01:55.580 **********
2026-04-06 05:09:27.674135 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:27.674146 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:27.674157 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:27.674168 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:27.674188 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:27.674199 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:27.674209 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:27.674220 | orchestrator |
2026-04-06 05:09:27.674231 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] *******************************
2026-04-06 05:09:27.674242 | orchestrator | Monday 06 April 2026 05:09:26 +0000 (0:00:01.041) 0:01:56.622 **********
2026-04-06 05:09:27.674253 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:27.674264 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:27.674275 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:27.674286 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:27.674296 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:27.674307 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:27.674318 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:27.674329 | orchestrator |
2026-04-06 05:09:27.674352 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] *********************
2026-04-06 05:09:37.056964 | orchestrator | Monday 06 April 2026 05:09:27 +0000 (0:00:00.757) 0:01:57.379 **********
2026-04-06 05:09:37.057142 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:37.057159 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:37.057171 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:37.057183 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})
2026-04-06 05:09:37.057196 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})
2026-04-06 05:09:37.057207 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 05:09:37.057218 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 05:09:37.057229 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:37.057240 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})
2026-04-06 05:09:37.057251 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})
2026-04-06 05:09:37.057262 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:37.057272 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:37.057283 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:37.057294 | orchestrator |
2026-04-06 05:09:37.057306 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] *************
2026-04-06 05:09:37.057317 | orchestrator | Monday 06 April 2026 05:09:28 +0000 (0:00:01.053) 0:01:58.433 **********
2026-04-06 05:09:37.057329 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:37.057339 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:37.057350 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:37.057361 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:37.057372 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:37.057383 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:37.057393 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:37.057405 | orchestrator |
2026-04-06 05:09:37.057417 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************
2026-04-06 05:09:37.057428 | orchestrator | Monday 06 April 2026 05:09:29 +0000 (0:00:00.860) 0:01:59.293 **********
2026-04-06 05:09:37.057439 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:37.057450 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:37.057460 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:37.057471 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:37.057482 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:37.057515 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:37.057528 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:37.057541 | orchestrator |
2026-04-06 05:09:37.057555 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ******
2026-04-06 05:09:37.057569 | orchestrator | Monday 06 April 2026 05:09:30 +0000 (0:00:01.037) 0:02:00.331 **********
2026-04-06 05:09:37.057581 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:37.057594 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:37.057607 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:37.057620 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:37.057634 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:37.057647 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:37.057660 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:37.057672 | orchestrator |
2026-04-06 05:09:37.057685 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] ***
2026-04-06 05:09:37.057700 | orchestrator | Monday 06 April 2026 05:09:31 +0000 (0:00:00.789) 0:02:01.120 **********
2026-04-06 05:09:37.057713 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:37.057727 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:37.057740 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:37.057753 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:37.057766 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:37.057779 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:37.057792 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:37.057818 | orchestrator |
2026-04-06 05:09:37.057833 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ********************************
2026-04-06 05:09:37.057846 | orchestrator | Monday 06 April 2026 05:09:32 +0000 (0:00:01.101) 0:02:02.222 **********
2026-04-06 05:09:37.057859 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:37.057871 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:37.057881 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:37.057892 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:37.057903 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:37.057914 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:37.057925 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:37.057935 | orchestrator |
2026-04-06 05:09:37.057946 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] **************
2026-04-06 05:09:37.057957 | orchestrator | Monday 06 April 2026 05:09:33 +0000 (0:00:00.772) 0:02:02.994 **********
2026-04-06 05:09:37.057968 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:37.057978 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:37.057989 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:37.058070 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:37.058082 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:37.058093 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:37.058104 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:37.058115 | orchestrator |
2026-04-06 05:09:37.058125 | orchestrator | TASK [ceph-validate : Include check_devices.yml] *******************************
2026-04-06 05:09:37.058148 | orchestrator | Monday 06 April 2026 05:09:34 +0000 (0:00:01.030) 0:02:04.025 **********
2026-04-06 05:09:37.058178 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:37.058190 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:37.058201 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:37.058211 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:37.058223 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 05:09:37.058234 | orchestrator |
2026-04-06 05:09:37.058245 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************
2026-04-06 05:09:37.058256 | orchestrator | Monday 06 April 2026 05:09:35 +0000 (0:00:01.569) 0:02:05.595 **********
2026-04-06 05:09:37.058267 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:09:37.058279 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:09:37.058299 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:09:37.058309 | orchestrator |
2026-04-06 05:09:37.058321 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] **************************
2026-04-06 05:09:37.058331 | orchestrator | Monday 06 April 2026 05:09:36 +0000 (0:00:00.373) 0:02:05.969 **********
2026-04-06 05:09:37.058343 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})
2026-04-06 05:09:37.058354 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})
2026-04-06 05:09:37.058364 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:37.058375 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 05:09:37.058387 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 05:09:37.058397 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:37.058408 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})
2026-04-06 05:09:37.058419 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})
2026-04-06 05:09:37.058430 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:37.058441 | orchestrator |
2026-04-06 05:09:37.058452 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] ***********************
2026-04-06 05:09:37.058463 | orchestrator | Monday 06 April 2026 05:09:36 +0000 (0:00:00.417) 0:02:06.386 **********
2026-04-06 05:09:37.058476 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:37.058490 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:37.058501 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:37.058513 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:37.058530 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:37.058541 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:37.058553 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:37.058564 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:37.058583 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:37.058594 | orchestrator |
2026-04-06 05:09:37.058610 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] ***
2026-04-06 05:09:40.275503 | orchestrator | Monday 06 April 2026 05:09:37 +0000 (0:00:00.574) 0:02:06.771 **********
2026-04-06 05:09:40.275603 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:40.275618 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:40.275630 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:40.275641 | orchestrator |
2026-04-06 05:09:40.275653 | orchestrator | TASK [ceph-validate : Get devices information] *********************************
2026-04-06 05:09:40.275664 | orchestrator | Monday 06 April 2026 05:09:37 +0000 (0:00:00.574) 0:02:07.346 **********
2026-04-06 05:09:40.275675 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:40.275685 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:40.275696 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:40.275707 | orchestrator |
2026-04-06 05:09:40.275718 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] **************
2026-04-06 05:09:40.275729 | orchestrator | Monday 06 April 2026 05:09:37 +0000 (0:00:00.364) 0:02:07.710 **********
2026-04-06 05:09:40.275740 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:40.275750 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:40.275761 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:40.275772 | orchestrator |
2026-04-06 05:09:40.275783 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] ***************
2026-04-06 05:09:40.275793 | orchestrator | Monday 06 April 2026 05:09:38 +0000 (0:00:00.322) 0:02:08.032 **********
2026-04-06 05:09:40.275804 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:40.275815 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:40.275825 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:40.275836 | orchestrator |
2026-04-06 05:09:40.275847 | orchestrator | TASK [ceph-validate : Check data logical volume] *******************************
2026-04-06 05:09:40.275857 | orchestrator | Monday 06 April 2026 05:09:38 +0000 (0:00:00.294) 0:02:08.327 **********
2026-04-06 05:09:40.275868 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})
2026-04-06 05:09:40.275880 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 05:09:40.275891 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})
2026-04-06 05:09:40.275902 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 05:09:40.275913 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})
2026-04-06 05:09:40.275923 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})
2026-04-06 05:09:40.275934 | orchestrator |
2026-04-06 05:09:40.275945 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] ***
2026-04-06 05:09:40.275956 | orchestrator | Monday 06 April 2026 05:09:39 +0000 (0:00:01.365) 0:02:09.692 **********
2026-04-06 05:09:40.275991 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33/osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1775444398.269193, 'mtime': 1775444398.2641928, 'ctime': 1775444398.2641928, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33/osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:40.276083 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c/osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime':
1775444417.6554937, 'mtime': 1775444417.6504936, 'ctime': 1775444417.6504936, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c/osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:40.276100 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:40.276116 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3/osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 951, 'dev': 6, 'nlink': 1, 'atime': 1775444401.4703133, 'mtime': 1775444401.4653132, 'ctime': 1775444401.4653132, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3/osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:40.276137 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a/osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 961, 'dev': 6, 'nlink': 1, 'atime': 1775444420.2496014, 'mtime': 1775444420.2456012, 'ctime': 1775444420.2456012, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a/osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:40.276158 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:40.276181 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447/osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 957, 'dev': 6, 'nlink': 1, 'atime': 1775444401.411512, 'mtime': 1775444401.4055119, 'ctime': 1775444401.4055119, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447/osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:42.005284 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-4d79f264-f564-5244-b3d4-1e30cd615742/osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 967, 'dev': 6, 'nlink': 1, 'atime': 1775444419.67779, 'mtime': 1775444419.67479, 'ctime': 1775444419.67479, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-4d79f264-f564-5244-b3d4-1e30cd615742/osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:42.005389 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:42.005408 | orchestrator |
2026-04-06 05:09:42.005422 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] ***********************
2026-04-06 05:09:42.005434 | orchestrator | Monday 06 April 2026 05:09:40 +0000 (0:00:00.413) 0:02:10.105 **********
2026-04-06 05:09:42.005447 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})
2026-04-06 05:09:42.005485 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})
2026-04-06 05:09:42.005497 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:42.005508 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 05:09:42.005519 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 05:09:42.005530 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:42.005556 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg':
'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})
2026-04-06 05:09:42.005567 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})
2026-04-06 05:09:42.005578 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:42.005589 | orchestrator |
2026-04-06 05:09:42.005601 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] ***
2026-04-06 05:09:42.005612 | orchestrator | Monday 06 April 2026 05:09:40 +0000 (0:00:00.373) 0:02:10.479 **********
2026-04-06 05:09:42.005625 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:42.005639 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:42.005651 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:42.005662 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:42.005691 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:42.005703 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:42.005714 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:42.005725 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:42.005736 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:42.005748 | orchestrator |
2026-04-06 05:09:42.005759 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] **********************
2026-04-06 05:09:42.005770 | orchestrator | Monday 06 April 2026 05:09:41 +0000 (0:00:00.359) 0:02:10.838 **********
2026-04-06 05:09:42.005782 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'})
2026-04-06 05:09:42.005800 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'})
2026-04-06 05:09:42.005815 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:42.005828 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'})
2026-04-06 05:09:42.005841 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'})
2026-04-06 05:09:42.005854 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:42.005867 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'})
2026-04-06 05:09:42.005880 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'})
2026-04-06 05:09:42.005893 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:42.005904 | orchestrator |
2026-04-06 05:09:42.005916 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] ***
2026-04-06 05:09:42.005927 | orchestrator | Monday 06 April 2026 05:09:41 +0000 (0:00:00.607) 0:02:11.446 **********
2026-04-06 05:09:42.005951 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-44d7a625-0d29-5597-9a0c-b91ce06f2e33', 'data_vg': 'ceph-44d7a625-0d29-5597-9a0c-b91ce06f2e33'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:42.005963 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-33ff4195-b9ae-565c-9501-f62265c8cf2c', 'data_vg': 'ceph-33ff4195-b9ae-565c-9501-f62265c8cf2c'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:42.005974 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:42.005986 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3', 'data_vg': 'ceph-c3bdc13a-4e4a-504e-9e7c-ad28314ab8c3'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:42.006079 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-8c307d7c-3927-5061-a8a8-155bb148bb1a', 'data_vg': 'ceph-8c307d7c-3927-5061-a8a8-155bb148bb1a'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:42.006093 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:42.006104 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-fcd584d6-c8ff-5eaf-81cc-26105cfb5447', 'data_vg': 'ceph-fcd584d6-c8ff-5eaf-81cc-26105cfb5447'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:42.006124 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-4d79f264-f564-5244-b3d4-1e30cd615742', 'data_vg': 'ceph-4d79f264-f564-5244-b3d4-1e30cd615742'}, 'ansible_loop_var': 'item'})
2026-04-06 05:09:46.494205 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:46.494305 | orchestrator |
2026-04-06 05:09:46.494321 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] *******************************
2026-04-06 05:09:46.494334 | orchestrator | Monday 06 April 2026 05:09:42 +0000 (0:00:00.366) 0:02:11.812 **********
2026-04-06 05:09:46.494366 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:46.494376 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:46.494386 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:46.494396 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:46.494406 | orchestrator | skipping:
[testbed-node-4]
2026-04-06 05:09:46.494415 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:46.494425 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:46.494439 | orchestrator |
2026-04-06 05:09:46.494456 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] *****************************
2026-04-06 05:09:46.494491 | orchestrator | Monday 06 April 2026 05:09:42 +0000 (0:00:00.817) 0:02:12.630 **********
2026-04-06 05:09:46.494506 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:46.494535 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:46.494552 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:46.494569 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:46.494586 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 05:09:46.494598 | orchestrator |
2026-04-06 05:09:46.494608 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] **************
2026-04-06 05:09:46.494618 | orchestrator | Monday 06 April 2026 05:09:44 +0000 (0:00:01.709) 0:02:14.339 **********
2026-04-06 05:09:46.494629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.494642 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.494654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.494666 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.494693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.494705 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:46.494717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.494729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.494756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.494768 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.494780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.494798 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:46.494814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.494832 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.494851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.494868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.494886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.494917 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:46.494936 | orchestrator |
2026-04-06 05:09:46.494954 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ********************
2026-04-06 05:09:46.494970 | orchestrator | Monday 06 April 2026 05:09:45 +0000 (0:00:00.439) 0:02:14.778 **********
2026-04-06 05:09:46.494983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495054 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495120 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:46.495131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495142 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495185 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:46.495196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495222 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495278 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:46.495288 | orchestrator |
2026-04-06 05:09:46.495299 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ********************
2026-04-06 05:09:46.495310 | orchestrator | Monday 06 April 2026 05:09:45 +0000 (0:00:00.671) 0:02:15.450 **********
2026-04-06 05:09:46.495321 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495391 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:46.495402 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495424 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495457 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:46.495468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495478 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495500 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495511 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:09:46.495521 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:46.495532 | orchestrator |
2026-04-06 05:09:46.495543 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] ***********************************
2026-04-06 05:09:46.495554 | orchestrator | Monday 06 April 2026 05:09:46 +0000 (0:00:00.438) 0:02:15.888 **********
2026-04-06 05:09:46.495565 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:46.495576 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:46.495607 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:53.322714 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:53.322832 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:53.322847 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:53.322858 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:53.322870 | orchestrator |
2026-04-06 05:09:53.322882 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] *****************************
2026-04-06 05:09:53.322894 | orchestrator | Monday 06 April 2026 05:09:46 +0000 (0:00:00.752) 0:02:16.641 **********
2026-04-06 05:09:53.322906 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:53.322916 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:53.322928 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:53.322939 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:53.322949 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:53.322960 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:53.322971 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:53.322982 | orchestrator |
2026-04-06 05:09:53.323056 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ******************
2026-04-06 05:09:53.323069 | orchestrator | Monday 06 April 2026 05:09:47 +0000 (0:00:01.007) 0:02:17.648 **********
2026-04-06 05:09:53.323080 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:53.323091 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:53.323102 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:53.323113 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:53.323124 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:53.323135 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:53.323146 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:53.323157 | orchestrator |
2026-04-06 05:09:53.323191 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] ***
2026-04-06 05:09:53.323203 | orchestrator | Monday 06 April 2026 05:09:48 +0000 (0:00:00.724) 0:02:18.372 **********
2026-04-06 05:09:53.323214 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:53.323225 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:53.323238 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:53.323254 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:53.323273 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:53.323285 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:53.323296 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:53.323306 | orchestrator |
2026-04-06 05:09:53.323318 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] ***
2026-04-06 05:09:53.323330 | orchestrator | Monday 06 April 2026 05:09:49 +0000 (0:00:01.071) 0:02:19.444 **********
2026-04-06 05:09:53.323341 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:53.323352 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:53.323363 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:53.323374 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:53.323384 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:53.323395 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:53.323406 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:53.323416 | orchestrator |
2026-04-06 05:09:53.323427 | orchestrator | TASK [ceph-validate : Validate container registry credentials] *****************
2026-04-06 05:09:53.323438 | orchestrator | Monday 06 April 2026 05:09:50 +0000 (0:00:00.976) 0:02:20.421 **********
2026-04-06 05:09:53.323463 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:53.323474 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:53.323485 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:53.323495 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:53.323506 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:53.323517 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:53.323527 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:53.323538 | orchestrator |
2026-04-06 05:09:53.323549 | orchestrator | TASK [ceph-validate : Validate container service and container package] ********
2026-04-06 05:09:53.323560 | orchestrator | Monday 06 April 2026 05:09:51 +0000 (0:00:00.715) 0:02:21.136 **********
2026-04-06 05:09:53.323571 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:53.323581 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:09:53.323592 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:09:53.323603 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:09:53.323613 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:09:53.323624 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:09:53.323635 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:09:53.323645 | orchestrator |
2026-04-06 05:09:53.323656 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] **********************
2026-04-06 05:09:53.323667 | orchestrator | Monday 06 April 2026 05:09:52 +0000 (0:00:01.014) 0:02:22.151 **********
2026-04-06 05:09:53.323679 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-06 05:09:53.323692 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-06 05:09:53.323705 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-06 05:09:53.323717 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-06 05:09:53.323729 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-06 05:09:53.323750 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-06 05:09:53.323761 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:09:53.323789 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-06 05:09:53.323801 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-06 05:09:53.323812 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-06 05:09:53.323822 | orchestrator | skipping:
[testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-06 05:09:53.323833 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-06 05:09:53.323845 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-06 05:09:53.323855 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-06 05:09:53.323866 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-06 05:09:53.323877 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-06 05:09:53.323888 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-06 05:09:53.323899 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-06 05:09:53.323915 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-06 05:09:53.323926 | orchestrator | skipping: [testbed-node-1] 
2026-04-06 05:09:53.323937 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:09:53.323948 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-06 05:09:53.323959 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-06 05:09:53.323970 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-06 05:09:53.323980 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-06 05:09:53.323991 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-06 05:09:53.324032 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-06 05:09:53.324043 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-06 05:09:53.324054 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-06 05:09:53.324065 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 
'name': 'client.nova'})  2026-04-06 05:09:53.324093 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-06 05:09:55.415823 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-06 05:09:55.415955 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-06 05:09:55.415972 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-06 05:09:55.415984 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-06 05:09:55.416049 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:09:55.416065 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:09:55.416077 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-06 05:09:55.416090 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-06 05:09:55.416101 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-06 05:09:55.416112 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 
'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-06 05:09:55.416123 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-06 05:09:55.416134 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-06 05:09:55.416161 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-06 05:09:55.416173 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-06 05:09:55.416185 | orchestrator | skipping: [testbed-manager] 2026-04-06 05:09:55.416196 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-06 05:09:55.416231 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-06 05:09:55.416243 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:09:55.416254 | orchestrator | 2026-04-06 05:09:55.416266 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************ 2026-04-06 05:09:55.416278 | orchestrator | Monday 06 April 2026 05:09:53 +0000 (0:00:01.218) 0:02:23.369 ********** 2026-04-06 05:09:55.416289 | orchestrator | skipping: [testbed-node-0] 2026-04-06 
05:09:55.416301 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:09:55.416312 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:09:55.416323 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:09:55.416333 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:09:55.416344 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:09:55.416355 | orchestrator | skipping: [testbed-manager] 2026-04-06 05:09:55.416366 | orchestrator | 2026-04-06 05:09:55.416377 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] **************************** 2026-04-06 05:09:55.416388 | orchestrator | Monday 06 April 2026 05:09:54 +0000 (0:00:01.040) 0:02:24.409 ********** 2026-04-06 05:09:55.416399 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-06 05:09:55.416410 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-06 05:09:55.416421 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-06 05:09:55.416450 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-06 05:09:55.416461 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-06 05:09:55.416473 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 
'client.manila'})  2026-04-06 05:09:55.416484 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:09:55.416495 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-06 05:09:55.416505 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-06 05:09:55.416516 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-06 05:09:55.416527 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-06 05:09:55.416538 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-06 05:09:55.416549 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-06 05:09:55.416560 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:09:55.416587 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-06 05:09:55.416607 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-06 05:09:55.416618 | orchestrator | skipping: [testbed-node-2] => (item={'caps': 
{'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-06 05:09:55.416629 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-06 05:09:55.416640 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-06 05:09:55.416651 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-06 05:09:55.416662 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:09:55.416672 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-06 05:09:55.416684 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-06 05:09:55.416695 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-06 05:09:55.416745 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-06 05:09:55.416757 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-06 05:09:55.416768 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 
'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-06 05:09:55.416779 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-06 05:09:55.416798 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-06 05:10:10.026496 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-06 05:10:10.026594 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-06 05:10:10.026610 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:10:10.026624 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-06 05:10:10.026669 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-06 05:10:10.026683 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-06 05:10:10.026696 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-06 05:10:10.026728 | 
orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-06 05:10:10.026741 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-06 05:10:10.026753 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-06 05:10:10.026776 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-06 05:10:10.026787 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-06 05:10:10.026798 | orchestrator | skipping: [testbed-manager] 2026-04-06 05:10:10.026810 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-06 05:10:10.026821 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-06 05:10:10.026831 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:10:10.026842 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-06 05:10:10.026853 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile 
rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-06 05:10:10.026865 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-06 05:10:10.026875 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:10:10.026886 | orchestrator | 2026-04-06 05:10:10.026898 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ****************************** 2026-04-06 05:10:10.026910 | orchestrator | Monday 06 April 2026 05:09:55 +0000 (0:00:01.070) 0:02:25.479 ********** 2026-04-06 05:10:10.026920 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:10:10.026931 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:10:10.026942 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:10:10.026953 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:10:10.026964 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:10:10.026974 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:10:10.026985 | orchestrator | skipping: [testbed-manager] 2026-04-06 05:10:10.027067 | orchestrator | 2026-04-06 05:10:10.027083 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] **************************** 2026-04-06 05:10:10.027096 | orchestrator | Monday 06 April 2026 05:09:56 +0000 (0:00:01.088) 0:02:26.568 ********** 2026-04-06 05:10:10.027110 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:10:10.027122 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:10:10.027135 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:10:10.027148 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:10:10.027160 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:10:10.027174 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:10:10.027187 | orchestrator | skipping: [testbed-manager] 2026-04-06 
05:10:10.027200 | orchestrator | 2026-04-06 05:10:10.027211 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] ***************************** 2026-04-06 05:10:10.027247 | orchestrator | Monday 06 April 2026 05:09:57 +0000 (0:00:00.750) 0:02:27.318 ********** 2026-04-06 05:10:10.027259 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:10:10.027270 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:10:10.027281 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:10:10.027292 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:10:10.027303 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:10:10.027313 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:10:10.027324 | orchestrator | skipping: [testbed-manager] 2026-04-06 05:10:10.027335 | orchestrator | 2026-04-06 05:10:10.027346 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-04-06 05:10:10.027357 | orchestrator | Monday 06 April 2026 05:09:59 +0000 (0:00:01.748) 0:02:29.066 ********** 2026-04-06 05:10:10.027368 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-04-06 05:10:10.027380 | orchestrator | 2026-04-06 05:10:10.027391 | orchestrator | TASK [ceph-container-engine : Include specific variables] ********************** 2026-04-06 05:10:10.027401 | orchestrator | Monday 06 April 2026 05:10:01 +0000 (0:00:01.912) 0:02:30.979 ********** 2026-04-06 05:10:10.027412 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-06 05:10:10.027423 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-06 05:10:10.027434 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-06 
05:10:10.027445 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-06 05:10:10.027455 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-06 05:10:10.027466 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-06 05:10:10.027477 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-06 05:10:10.027487 | orchestrator | 2026-04-06 05:10:10.027498 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] **** 2026-04-06 05:10:10.027509 | orchestrator | Monday 06 April 2026 05:10:02 +0000 (0:00:00.977) 0:02:31.956 ********** 2026-04-06 05:10:10.027520 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:10:10.027531 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:10:10.027542 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:10:10.027552 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:10:10.027563 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:10:10.027579 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:10:10.027590 | orchestrator | skipping: [testbed-manager] 2026-04-06 05:10:10.027601 | orchestrator | 2026-04-06 05:10:10.027612 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] ********* 2026-04-06 05:10:10.027623 | orchestrator | Monday 06 April 2026 05:10:03 +0000 (0:00:01.114) 0:02:33.071 ********** 2026-04-06 05:10:10.027634 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:10:10.027645 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:10:10.027656 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:10:10.027667 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:10:10.027678 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:10:10.027689 | orchestrator | skipping: [testbed-node-5] 
2026-04-06 05:10:10.027699 | orchestrator | skipping: [testbed-manager] 2026-04-06 05:10:10.027710 | orchestrator | 2026-04-06 05:10:10.027721 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] *************** 2026-04-06 05:10:10.027732 | orchestrator | Monday 06 April 2026 05:10:04 +0000 (0:00:00.804) 0:02:33.876 ********** 2026-04-06 05:10:10.027743 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:10:10.027754 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:10:10.027771 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:10:10.027782 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:10:10.027792 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:10:10.027803 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:10:10.027814 | orchestrator | ok: [testbed-manager] 2026-04-06 05:10:10.027825 | orchestrator | 2026-04-06 05:10:10.027836 | orchestrator | TASK [ceph-container-engine : Restart docker] ********************************** 2026-04-06 05:10:10.027847 | orchestrator | Monday 06 April 2026 05:10:05 +0000 (0:00:01.480) 0:02:35.356 ********** 2026-04-06 05:10:10.027858 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:10:10.027869 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:10:10.027879 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:10:10.027890 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:10:10.027901 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:10:10.027912 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:10:10.027922 | orchestrator | skipping: [testbed-manager] 2026-04-06 05:10:10.027933 | orchestrator | 2026-04-06 05:10:10.027944 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-04-06 05:10:10.027955 | orchestrator | Monday 06 April 2026 05:10:06 +0000 (0:00:01.304) 0:02:36.661 ********** 2026-04-06 05:10:10.027966 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:10:10.027976 | orchestrator | 
skipping: [testbed-node-1] 2026-04-06 05:10:10.027987 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:10:10.028016 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:10:10.028028 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:10:10.028038 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:10:10.028049 | orchestrator | skipping: [testbed-manager] 2026-04-06 05:10:10.028060 | orchestrator | 2026-04-06 05:10:10.028071 | orchestrator | TASK [Get the ceph release being deployed] ************************************* 2026-04-06 05:10:10.028081 | orchestrator | Monday 06 April 2026 05:10:08 +0000 (0:00:01.310) 0:02:37.972 ********** 2026-04-06 05:10:10.028092 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:10:10.028103 | orchestrator | 2026-04-06 05:10:10.028114 | orchestrator | TASK [Check ceph release being deployed] *************************************** 2026-04-06 05:10:10.028125 | orchestrator | Monday 06 April 2026 05:10:09 +0000 (0:00:01.596) 0:02:39.568 ********** 2026-04-06 05:10:10.028136 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:10:10.028147 | orchestrator | 2026-04-06 05:10:10.028164 | orchestrator | PLAY [Ensure cluster config is applied] **************************************** 2026-04-06 05:10:28.200144 | orchestrator | 2026-04-06 05:10:28.200257 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-06 05:10:28.200274 | orchestrator | Monday 06 April 2026 05:10:10 +0000 (0:00:00.723) 0:02:40.292 ********** 2026-04-06 05:10:28.200286 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:10:28.200298 | orchestrator | 2026-04-06 05:10:28.200310 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-06 05:10:28.200322 | orchestrator | Monday 06 April 2026 05:10:11 +0000 (0:00:00.443) 0:02:40.735 ********** 2026-04-06 05:10:28.200333 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:10:28.200343 | 
orchestrator | 2026-04-06 05:10:28.200354 | orchestrator | TASK [Set cluster configs] ***************************************************** 2026-04-06 05:10:28.200365 | orchestrator | Monday 06 April 2026 05:10:11 +0000 (0:00:00.361) 0:02:41.097 ********** 2026-04-06 05:10:28.200378 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-06 05:10:28.200391 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-04-06 05:10:28.200427 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-06 05:10:28.200453 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-06 05:10:28.200466 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 
'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-06 05:10:28.200478 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}])  2026-04-06 05:10:28.200491 | orchestrator | 2026-04-06 05:10:28.200502 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-04-06 05:10:28.200513 | orchestrator | 2026-04-06 05:10:28.200524 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-04-06 05:10:28.200534 | orchestrator | Monday 06 April 2026 05:10:20 +0000 (0:00:09.417) 0:02:50.515 ********** 2026-04-06 05:10:28.200545 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:10:28.200556 | orchestrator | 2026-04-06 05:10:28.200566 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-04-06 05:10:28.200577 | orchestrator | Monday 06 April 2026 05:10:21 +0000 (0:00:00.454) 0:02:50.969 ********** 2026-04-06 05:10:28.200588 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:10:28.200598 | orchestrator | 2026-04-06 05:10:28.200609 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-04-06 05:10:28.200620 | orchestrator | Monday 06 April 2026 05:10:21 +0000 (0:00:00.141) 0:02:51.111 ********** 2026-04-06 05:10:28.200633 | orchestrator | skipping: 
[testbed-node-0] 2026-04-06 05:10:28.200647 | orchestrator | 2026-04-06 05:10:28.200660 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-04-06 05:10:28.200673 | orchestrator | Monday 06 April 2026 05:10:21 +0000 (0:00:00.132) 0:02:51.243 ********** 2026-04-06 05:10:28.200685 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:10:28.200699 | orchestrator | 2026-04-06 05:10:28.200711 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-06 05:10:28.200724 | orchestrator | Monday 06 April 2026 05:10:21 +0000 (0:00:00.145) 0:02:51.389 ********** 2026-04-06 05:10:28.200737 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-04-06 05:10:28.200749 | orchestrator | 2026-04-06 05:10:28.200763 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-06 05:10:28.200791 | orchestrator | Monday 06 April 2026 05:10:21 +0000 (0:00:00.212) 0:02:51.602 ********** 2026-04-06 05:10:28.200803 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:10:28.200814 | orchestrator | 2026-04-06 05:10:28.200825 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-06 05:10:28.200836 | orchestrator | Monday 06 April 2026 05:10:22 +0000 (0:00:00.470) 0:02:52.073 ********** 2026-04-06 05:10:28.200854 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:10:28.200865 | orchestrator | 2026-04-06 05:10:28.200876 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-06 05:10:28.200887 | orchestrator | Monday 06 April 2026 05:10:22 +0000 (0:00:00.136) 0:02:52.210 ********** 2026-04-06 05:10:28.200897 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:10:28.200908 | orchestrator | 2026-04-06 05:10:28.200918 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 
2026-04-06 05:10:28.200929 | orchestrator | Monday 06 April 2026 05:10:23 +0000 (0:00:00.523) 0:02:52.734 ********** 2026-04-06 05:10:28.200940 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:10:28.200950 | orchestrator | 2026-04-06 05:10:28.200961 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-06 05:10:28.200972 | orchestrator | Monday 06 April 2026 05:10:23 +0000 (0:00:00.393) 0:02:53.127 ********** 2026-04-06 05:10:28.200982 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:10:28.200993 | orchestrator | 2026-04-06 05:10:28.201027 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-06 05:10:28.201038 | orchestrator | Monday 06 April 2026 05:10:23 +0000 (0:00:00.142) 0:02:53.269 ********** 2026-04-06 05:10:28.201048 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:10:28.201059 | orchestrator | 2026-04-06 05:10:28.201070 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-06 05:10:28.201082 | orchestrator | Monday 06 April 2026 05:10:23 +0000 (0:00:00.167) 0:02:53.437 ********** 2026-04-06 05:10:28.201092 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:10:28.201103 | orchestrator | 2026-04-06 05:10:28.201114 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-06 05:10:28.201124 | orchestrator | Monday 06 April 2026 05:10:23 +0000 (0:00:00.157) 0:02:53.595 ********** 2026-04-06 05:10:28.201135 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:10:28.201146 | orchestrator | 2026-04-06 05:10:28.201156 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-06 05:10:28.201167 | orchestrator | Monday 06 April 2026 05:10:24 +0000 (0:00:00.149) 0:02:53.745 ********** 2026-04-06 05:10:28.201178 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-06 05:10:28.201188 
| orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:10:28.201199 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:10:28.201210 | orchestrator | 2026-04-06 05:10:28.201225 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-06 05:10:28.201236 | orchestrator | Monday 06 April 2026 05:10:24 +0000 (0:00:00.667) 0:02:54.413 ********** 2026-04-06 05:10:28.201247 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:10:28.201258 | orchestrator | 2026-04-06 05:10:28.201268 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-06 05:10:28.201279 | orchestrator | Monday 06 April 2026 05:10:24 +0000 (0:00:00.267) 0:02:54.680 ********** 2026-04-06 05:10:28.201290 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-06 05:10:28.201301 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:10:28.201311 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:10:28.201322 | orchestrator | 2026-04-06 05:10:28.201333 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-06 05:10:28.201343 | orchestrator | Monday 06 April 2026 05:10:26 +0000 (0:00:01.830) 0:02:56.511 ********** 2026-04-06 05:10:28.201354 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-06 05:10:28.201365 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-06 05:10:28.201375 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-06 05:10:28.201386 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:10:28.201396 | orchestrator | 2026-04-06 05:10:28.201407 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] 
********************* 2026-04-06 05:10:28.201426 | orchestrator | Monday 06 April 2026 05:10:27 +0000 (0:00:00.424) 0:02:56.936 ********** 2026-04-06 05:10:28.201438 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-06 05:10:28.201451 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-06 05:10:28.201462 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-06 05:10:28.201472 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:10:28.201483 | orchestrator | 2026-04-06 05:10:28.201494 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-06 05:10:28.201505 | orchestrator | Monday 06 April 2026 05:10:28 +0000 (0:00:00.914) 0:02:57.850 ********** 2026-04-06 05:10:28.201523 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:10:32.322485 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:10:32.322590 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:10:32.322606 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:10:32.322620 | orchestrator | 2026-04-06 05:10:32.322632 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-06 05:10:32.322644 | orchestrator | Monday 06 April 2026 05:10:28 +0000 (0:00:00.161) 0:02:58.012 ********** 2026-04-06 05:10:32.322675 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '7ab3f7ebb0fe', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-06 05:10:25.490589', 'end': '2026-04-06 05:10:25.544977', 'delta': '0:00:00.054388', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7ab3f7ebb0fe'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-06 05:10:32.322690 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '46d5ea15fe96', 'stderr': '', 'rc': 0, 'cmd': 
['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-06 05:10:26.054312', 'end': '2026-04-06 05:10:26.117343', 'delta': '0:00:00.063031', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['46d5ea15fe96'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-06 05:10:32.322722 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a87eea657fd7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-06 05:10:26.613338', 'end': '2026-04-06 05:10:26.660234', 'delta': '0:00:00.046896', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a87eea657fd7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-06 05:10:32.322733 | orchestrator | 2026-04-06 05:10:32.322745 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-06 05:10:32.322756 | orchestrator | Monday 06 April 2026 05:10:28 +0000 (0:00:00.208) 0:02:58.220 ********** 2026-04-06 05:10:32.322767 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:10:32.322779 | orchestrator | 2026-04-06 05:10:32.322790 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-06 05:10:32.322801 | orchestrator | 
Monday 06 April 2026 05:10:28 +0000 (0:00:00.274) 0:02:58.495 ********** 2026-04-06 05:10:32.322811 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:10:32.322823 | orchestrator | 2026-04-06 05:10:32.322834 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-06 05:10:32.322935 | orchestrator | Monday 06 April 2026 05:10:29 +0000 (0:00:00.233) 0:02:58.729 ********** 2026-04-06 05:10:32.322949 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:10:32.322960 | orchestrator | 2026-04-06 05:10:32.322970 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-06 05:10:32.322981 | orchestrator | Monday 06 April 2026 05:10:29 +0000 (0:00:00.418) 0:02:59.147 ********** 2026-04-06 05:10:32.323079 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-04-06 05:10:32.323096 | orchestrator | 2026-04-06 05:10:32.323109 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-06 05:10:32.323122 | orchestrator | Monday 06 April 2026 05:10:30 +0000 (0:00:01.384) 0:03:00.531 ********** 2026-04-06 05:10:32.323134 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:10:32.323147 | orchestrator | 2026-04-06 05:10:32.323159 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-06 05:10:32.323171 | orchestrator | Monday 06 April 2026 05:10:30 +0000 (0:00:00.162) 0:03:00.694 ********** 2026-04-06 05:10:32.323185 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:10:32.323197 | orchestrator | 2026-04-06 05:10:32.323209 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-06 05:10:32.323222 | orchestrator | Monday 06 April 2026 05:10:31 +0000 (0:00:00.124) 0:03:00.819 ********** 2026-04-06 05:10:32.323235 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:10:32.323248 | orchestrator | 2026-04-06 
05:10:32.323261 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-06 05:10:32.323273 | orchestrator | Monday 06 April 2026 05:10:31 +0000 (0:00:00.217) 0:03:01.037 ********** 2026-04-06 05:10:32.323285 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:10:32.323298 | orchestrator | 2026-04-06 05:10:32.323311 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-06 05:10:32.323323 | orchestrator | Monday 06 April 2026 05:10:31 +0000 (0:00:00.117) 0:03:01.154 ********** 2026-04-06 05:10:32.323335 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:10:32.323348 | orchestrator | 2026-04-06 05:10:32.323371 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-06 05:10:32.323384 | orchestrator | Monday 06 April 2026 05:10:31 +0000 (0:00:00.131) 0:03:01.285 ********** 2026-04-06 05:10:32.323397 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:10:32.323408 | orchestrator | 2026-04-06 05:10:32.323419 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-06 05:10:32.323430 | orchestrator | Monday 06 April 2026 05:10:31 +0000 (0:00:00.135) 0:03:01.421 ********** 2026-04-06 05:10:32.323440 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:10:32.323451 | orchestrator | 2026-04-06 05:10:32.323462 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-06 05:10:32.323473 | orchestrator | Monday 06 April 2026 05:10:31 +0000 (0:00:00.134) 0:03:01.555 ********** 2026-04-06 05:10:32.323491 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:10:32.323502 | orchestrator | 2026-04-06 05:10:32.323513 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-06 05:10:32.323523 | orchestrator | Monday 06 April 2026 05:10:31 +0000 (0:00:00.131) 
0:03:01.687 ********** 2026-04-06 05:10:32.323534 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:10:32.323545 | orchestrator | 2026-04-06 05:10:32.323555 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-06 05:10:32.323566 | orchestrator | Monday 06 April 2026 05:10:32 +0000 (0:00:00.132) 0:03:01.820 ********** 2026-04-06 05:10:32.323577 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:10:32.323588 | orchestrator | 2026-04-06 05:10:32.323598 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-06 05:10:32.323609 | orchestrator | Monday 06 April 2026 05:10:32 +0000 (0:00:00.119) 0:03:01.939 ********** 2026-04-06 05:10:32.323621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:10:32.323633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:10:32.323644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:10:32.323657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-06 05:10:32.323677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:10:32.579282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:10:32.579378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:10:32.579422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '23f8d4f9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part16', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part14', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part15', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part1', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': 
'79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-06 05:10:32.579450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:10:32.579470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:10:32.579516 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:10:32.579538 | orchestrator | 2026-04-06 05:10:32.579559 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-06 05:10:32.579581 | orchestrator | Monday 06 April 2026 05:10:32 +0000 (0:00:00.248) 0:03:02.188 ********** 2026-04-06 05:10:32.579625 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:10:32.579648 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:10:32.579673 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:10:32.579687 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:10:32.579699 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:10:32.579710 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:10:32.579741 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:10:41.988543 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '23f8d4f9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part16', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part14', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part15', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': 
'5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part1', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:10:41.988690 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:10:41.988713 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:10:41.988749 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:10:41.988764 | orchestrator |
2026-04-06 05:10:41.988776 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-06 05:10:41.988789 | orchestrator | Monday 06 April 2026 05:10:33 +0000 (0:00:00.535) 0:03:02.723 **********
2026-04-06 05:10:41.988805 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:10:41.988825 | orchestrator |
2026-04-06 05:10:41.988844 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-06 05:10:41.988863 | orchestrator | Monday 06 April 2026 05:10:33 +0000 (0:00:00.545) 0:03:03.269 **********
2026-04-06 05:10:41.988875 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:10:41.988887 | orchestrator |
2026-04-06 05:10:41.988904 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-06 05:10:41.988945 | orchestrator | Monday 06 April 2026 05:10:33 +0000 (0:00:00.127) 0:03:03.396 **********
2026-04-06 05:10:41.988967 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:10:41.988978 | orchestrator |
2026-04-06 05:10:41.988989 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-06 05:10:41.989027 | orchestrator | Monday 06 April 2026 05:10:34 +0000 (0:00:00.503) 0:03:03.899 **********
2026-04-06 05:10:41.989041 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:10:41.989055 | orchestrator |
2026-04-06 05:10:41.989098 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-06 05:10:41.989112 | orchestrator | Monday 06 April 2026 05:10:34 +0000 (0:00:00.127) 0:03:04.027 **********
2026-04-06 05:10:41.989125 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:10:41.989138 | orchestrator |
2026-04-06 05:10:41.989150 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-06 05:10:41.989162 | orchestrator | Monday 06 April 2026 05:10:34 +0000 (0:00:00.239) 0:03:04.266 **********
2026-04-06 05:10:41.989175 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:10:41.989191 | orchestrator |
2026-04-06 05:10:41.989210 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-06 05:10:41.989229 | orchestrator | Monday 06 April 2026 05:10:34 +0000 (0:00:00.144) 0:03:04.411 **********
2026-04-06 05:10:41.989261 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 05:10:41.989282 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-06 05:10:41.989303 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-06 05:10:41.989323 | orchestrator |
2026-04-06 05:10:41.989343 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-06 05:10:41.989362 | orchestrator | Monday 06 April 2026 05:10:35 +0000 (0:00:00.724) 0:03:05.135 **********
2026-04-06 05:10:41.989381 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 05:10:41.989402 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-06 05:10:41.989421 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-06 05:10:41.989441 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:10:41.989453 | orchestrator |
2026-04-06 05:10:41.989463 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-06 05:10:41.989474 | orchestrator | Monday 06 April 2026 05:10:35 +0000 (0:00:00.181) 0:03:05.317 **********
2026-04-06 05:10:41.989485 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:10:41.989496 | orchestrator |
2026-04-06 05:10:41.989507 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-06 05:10:41.989517 | orchestrator | Monday 06 April 2026 05:10:35 +0000 (0:00:00.138) 0:03:05.455 **********
2026-04-06 05:10:41.989540 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 05:10:41.989551 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 05:10:41.989563 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 05:10:41.989573 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-06 05:10:41.989588 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-06 05:10:41.989607 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-06 05:10:41.989625 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-06 05:10:41.989643 | orchestrator |
2026-04-06 05:10:41.989661 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-06 05:10:41.989679 | orchestrator | Monday 06 April 2026 05:10:36 +0000 (0:00:01.126) 0:03:06.581 **********
2026-04-06 05:10:41.989698 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 05:10:41.989718 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 05:10:41.989730 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 05:10:41.989741 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-06 05:10:41.989752 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-06 05:10:41.989762 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-06 05:10:41.989774 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-06 05:10:41.989784 | orchestrator |
2026-04-06 05:10:41.989795 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-04-06 05:10:41.989806 | orchestrator | Monday 06 April 2026 05:10:38 +0000 (0:00:01.664) 0:03:08.246 **********
2026-04-06 05:10:41.989817 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)]
2026-04-06 05:10:41.989828 | orchestrator |
2026-04-06 05:10:41.989838 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-04-06 05:10:41.989849 | orchestrator | Monday 06 April 2026 05:10:40 +0000 (0:00:01.824) 0:03:10.071 **********
2026-04-06 05:10:41.989860 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:10:41.989871 | orchestrator |
2026-04-06 05:10:41.989881 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-04-06 05:10:41.989892 | orchestrator | Monday 06 April 2026 05:10:40 +0000 (0:00:00.254) 0:03:10.326 **********
2026-04-06 05:10:41.989903 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:10:41.989913 | orchestrator |
2026-04-06 05:10:41.989924 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-04-06 05:10:41.989935 | orchestrator | Monday 06 April 2026 05:10:40 +0000 (0:00:00.145) 0:03:10.471 **********
2026-04-06 05:10:41.989945 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)]
2026-04-06 05:10:41.989956 | orchestrator |
2026-04-06 05:10:41.989967 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-04-06 05:10:41.989988 | orchestrator | Monday 06 April 2026 05:10:41 +0000 (0:00:01.229) 0:03:11.701 **********
2026-04-06 05:11:07.893568 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:07.893684 | orchestrator |
2026-04-06 05:11:07.893701 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-04-06 05:11:07.893714 | orchestrator | Monday 06 April 2026 05:10:42 +0000 (0:00:00.166) 0:03:11.867 **********
2026-04-06 05:11:07.893726 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 05:11:07.893737 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 05:11:07.893748 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 05:11:07.893783 | orchestrator |
2026-04-06 05:11:07.893796 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-04-06 05:11:07.893806 | orchestrator | Monday 06 April 2026 05:10:43 +0000 (0:00:01.436) 0:03:13.303 **********
2026-04-06 05:11:07.893817 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-04-06 05:11:07.893828 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-04-06 05:11:07.893855 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-04-06 05:11:07.893868 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-04-06 05:11:07.893878 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-04-06 05:11:07.893890 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-04-06 05:11:07.893900 | orchestrator |
2026-04-06 05:11:07.893911 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-04-06 05:11:07.893922 | orchestrator | Monday 06 April 2026 05:10:55 +0000 (0:00:11.778) 0:03:25.082 **********
2026-04-06 05:11:07.893933 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 05:11:07.893944 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 05:11:07.893955 | orchestrator |
2026-04-06 05:11:07.893966 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-04-06 05:11:07.893976 | orchestrator | Monday 06 April 2026 05:10:58 +0000 (0:00:03.181) 0:03:28.263 **********
2026-04-06 05:11:07.893987 | orchestrator | changed: [testbed-node-0]
2026-04-06 05:11:07.894062 | orchestrator |
2026-04-06 05:11:07.894078 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-06 05:11:07.894089 | orchestrator | Monday 06 April 2026 05:11:00 +0000 (0:00:01.513) 0:03:29.777 **********
2026-04-06 05:11:07.894100 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-04-06 05:11:07.894111 | orchestrator |
2026-04-06 05:11:07.894124 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-06 05:11:07.894138 | orchestrator | Monday 06 April 2026 05:11:00 +0000 (0:00:00.608) 0:03:30.386 **********
2026-04-06 05:11:07.894152 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-04-06 05:11:07.894164 | orchestrator |
2026-04-06 05:11:07.894177 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-06 05:11:07.894190 | orchestrator | Monday 06 April 2026 05:11:01 +0000 (0:00:00.607) 0:03:30.993 **********
2026-04-06 05:11:07.894203 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:11:07.894216 | orchestrator |
2026-04-06 05:11:07.894229 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-06 05:11:07.894242 | orchestrator | Monday 06 April 2026 05:11:02 +0000 (0:00:00.842) 0:03:31.836 **********
2026-04-06 05:11:07.894254 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:07.894267 | orchestrator |
2026-04-06 05:11:07.894280 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-06 05:11:07.894293 | orchestrator | Monday 06 April 2026 05:11:02 +0000 (0:00:00.138) 0:03:31.974 **********
2026-04-06 05:11:07.894306 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:07.894318 | orchestrator |
2026-04-06 05:11:07.894331 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-06 05:11:07.894344 | orchestrator | Monday 06 April 2026 05:11:02 +0000 (0:00:00.118) 0:03:32.093 **********
2026-04-06 05:11:07.894356 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:07.894370 | orchestrator |
2026-04-06 05:11:07.894382 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-06 05:11:07.894395 | orchestrator | Monday 06 April 2026 05:11:02 +0000 (0:00:00.134) 0:03:32.228 **********
2026-04-06 05:11:07.894408 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:11:07.894430 | orchestrator |
2026-04-06 05:11:07.894443 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-06 05:11:07.894456 | orchestrator | Monday 06 April 2026 05:11:03 +0000 (0:00:00.550) 0:03:32.778 **********
2026-04-06 05:11:07.894470 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:07.894482 | orchestrator |
2026-04-06 05:11:07.894492 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-06 05:11:07.894503 | orchestrator | Monday 06 April 2026 05:11:03 +0000 (0:00:00.133) 0:03:32.912 **********
2026-04-06 05:11:07.894514 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:07.894524 | orchestrator |
2026-04-06 05:11:07.894535 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-06 05:11:07.894546 | orchestrator | Monday 06 April 2026 05:11:03 +0000 (0:00:00.138) 0:03:33.051 **********
2026-04-06 05:11:07.894557 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:11:07.894568 | orchestrator |
2026-04-06 05:11:07.894579 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-06 05:11:07.894589 | orchestrator | Monday 06 April 2026 05:11:03 +0000 (0:00:00.546) 0:03:33.597 **********
2026-04-06 05:11:07.894600 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:11:07.894611 | orchestrator |
2026-04-06 05:11:07.894639 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-06 05:11:07.894651 | orchestrator | Monday 06 April 2026 05:11:04 +0000 (0:00:00.579) 0:03:34.177 **********
2026-04-06 05:11:07.894661 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:07.894672 | orchestrator |
2026-04-06 05:11:07.894683 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-06 05:11:07.894694 | orchestrator | Monday 06 April 2026 05:11:04 +0000 (0:00:00.143) 0:03:34.321 **********
2026-04-06 05:11:07.894705 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:11:07.894716 | orchestrator |
2026-04-06 05:11:07.894727 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-06 05:11:07.894738 | orchestrator | Monday 06 April 2026 05:11:04 +0000 (0:00:00.158) 0:03:34.479 **********
2026-04-06 05:11:07.894749 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:07.894760 | orchestrator |
2026-04-06 05:11:07.894771 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-06 05:11:07.894782 | orchestrator | Monday 06 April 2026 05:11:04 +0000 (0:00:00.133) 0:03:34.612 **********
2026-04-06 05:11:07.894792 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:07.894803 | orchestrator |
2026-04-06 05:11:07.894814 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-06 05:11:07.894831 | orchestrator | Monday 06 April 2026 05:11:05 +0000 (0:00:00.121) 0:03:34.734 **********
2026-04-06 05:11:07.894842 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:07.894853 | orchestrator |
2026-04-06 05:11:07.894864 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-06 05:11:07.894875 | orchestrator | Monday 06 April 2026 05:11:05 +0000 (0:00:00.411) 0:03:35.145 **********
2026-04-06 05:11:07.894886 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:07.894896 | orchestrator |
2026-04-06 05:11:07.894907 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-06 05:11:07.894918 | orchestrator | Monday 06 April 2026 05:11:05 +0000 (0:00:00.155) 0:03:35.300 **********
2026-04-06 05:11:07.894929 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:07.894939 | orchestrator |
2026-04-06 05:11:07.894950 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-06 05:11:07.894961 | orchestrator | Monday 06 April 2026 05:11:05 +0000 (0:00:00.148) 0:03:35.449 **********
2026-04-06 05:11:07.894972 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:11:07.894983 | orchestrator |
2026-04-06 05:11:07.894994 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-06 05:11:07.895038 | orchestrator | Monday 06 April 2026 05:11:05 +0000 (0:00:00.165) 0:03:35.614 **********
2026-04-06 05:11:07.895056 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:11:07.895092 | orchestrator |
2026-04-06 05:11:07.895112 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-06 05:11:07.895130 | orchestrator | Monday 06 April 2026 05:11:06 +0000 (0:00:00.166) 0:03:35.780 **********
2026-04-06 05:11:07.895145 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:11:07.895156 | orchestrator |
2026-04-06 05:11:07.895167 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-06 05:11:07.895178 | orchestrator | Monday 06 April 2026 05:11:06 +0000 (0:00:00.246) 0:03:36.026 **********
2026-04-06 05:11:07.895189 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:07.895200 | orchestrator |
2026-04-06 05:11:07.895211 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-06 05:11:07.895222 | orchestrator | Monday 06 April 2026 05:11:06 +0000 (0:00:00.127) 0:03:36.154 **********
2026-04-06 05:11:07.895232 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:07.895243 | orchestrator |
2026-04-06 05:11:07.895254 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-06 05:11:07.895265 | orchestrator | Monday 06 April 2026 05:11:06 +0000 (0:00:00.123) 0:03:36.278 **********
2026-04-06 05:11:07.895276 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:07.895287 | orchestrator |
2026-04-06 05:11:07.895298 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-06 05:11:07.895308 | orchestrator | Monday 06 April 2026 05:11:06 +0000 (0:00:00.156) 0:03:36.435 **********
2026-04-06 05:11:07.895319 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:07.895330 | orchestrator |
2026-04-06 05:11:07.895346 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-06 05:11:07.895357 | orchestrator | Monday 06 April 2026 05:11:06 +0000 (0:00:00.117) 0:03:36.552 **********
2026-04-06 05:11:07.895368 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:07.895378 | orchestrator |
2026-04-06 05:11:07.895389 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-06 05:11:07.895400 | orchestrator | Monday 06 April 2026 05:11:06 +0000 (0:00:00.132) 0:03:36.685 **********
2026-04-06 05:11:07.895410 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:07.895421 | orchestrator |
2026-04-06 05:11:07.895432 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-06 05:11:07.895443 | orchestrator | Monday 06 April 2026 05:11:07 +0000 (0:00:00.116) 0:03:36.801 **********
2026-04-06 05:11:07.895454 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:07.895465 | orchestrator |
2026-04-06 05:11:07.895476 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-06 05:11:07.895486 | orchestrator | Monday 06 April 2026 05:11:07 +0000 (0:00:00.126) 0:03:36.928 **********
2026-04-06 05:11:07.895497 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:07.895508 | orchestrator |
2026-04-06 05:11:07.895519 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-06 05:11:07.895529 | orchestrator | Monday 06 April 2026 05:11:07 +0000 (0:00:00.402) 0:03:37.331 **********
2026-04-06 05:11:07.895540 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:07.895551 | orchestrator |
2026-04-06 05:11:07.895562 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-06 05:11:07.895572 | orchestrator | Monday 06 April 2026 05:11:07 +0000 (0:00:00.135) 0:03:37.466 **********
2026-04-06 05:11:07.895583 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:07.895594 | orchestrator |
2026-04-06 05:11:07.895605 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-06 05:11:07.895616 | orchestrator | Monday 06 April 2026 05:11:07 +0000 (0:00:00.136) 0:03:37.602 **********
2026-04-06 05:11:26.667191 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:26.667319 | orchestrator |
2026-04-06 05:11:26.667342 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-06 05:11:26.667358 | orchestrator | Monday 06 April 2026 05:11:08 +0000 (0:00:00.149) 0:03:37.752 **********
2026-04-06 05:11:26.667374 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:26.667417 | orchestrator |
2026-04-06 05:11:26.667434 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-06 05:11:26.667474 | orchestrator | Monday 06 April 2026 05:11:08 +0000 (0:00:00.188) 0:03:37.940 **********
2026-04-06 05:11:26.667490 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:11:26.667507 | orchestrator |
2026-04-06 05:11:26.667521 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-06 05:11:26.667536 | orchestrator | Monday 06 April 2026 05:11:09 +0000 (0:00:00.985) 0:03:38.926 **********
2026-04-06 05:11:26.667549 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:11:26.667563 | orchestrator |
2026-04-06 05:11:26.667572 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-06 05:11:26.667580 | orchestrator | Monday 06 April 2026 05:11:10 +0000 (0:00:01.559) 0:03:40.486 **********
2026-04-06 05:11:26.667603 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-04-06 05:11:26.667617 | orchestrator |
2026-04-06 05:11:26.667631 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-06 05:11:26.667644 | orchestrator | Monday 06 April 2026 05:11:11 +0000 (0:00:00.568) 0:03:41.054 **********
2026-04-06 05:11:26.667657 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:26.667671 | orchestrator |
2026-04-06 05:11:26.667685 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-06 05:11:26.667699 | orchestrator | Monday 06 April 2026 05:11:11 +0000 (0:00:00.127) 0:03:41.182 **********
2026-04-06 05:11:26.667713 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:26.667726 | orchestrator |
2026-04-06 05:11:26.667740 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-06 05:11:26.667753 | orchestrator | Monday 06 April 2026 05:11:11 +0000 (0:00:00.132) 0:03:41.314 **********
2026-04-06 05:11:26.667767 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-06 05:11:26.667781 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-06 05:11:26.667795 | orchestrator |
2026-04-06 05:11:26.667808 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-06 05:11:26.667821 | orchestrator | Monday 06 April 2026 05:11:12 +0000 (0:00:00.839) 0:03:42.154 **********
2026-04-06 05:11:26.667835 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:11:26.667848 | orchestrator |
2026-04-06 05:11:26.667861 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-06 05:11:26.667875 | orchestrator | Monday 06 April 2026 05:11:13 +0000 (0:00:01.277) 0:03:43.431 **********
2026-04-06 05:11:26.667888 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:26.667901 | orchestrator |
2026-04-06 05:11:26.667910 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-06 05:11:26.667917 | orchestrator | Monday 06 April 2026 05:11:13 +0000 (0:00:00.145) 0:03:43.577 **********
2026-04-06 05:11:26.667925 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:26.667933 | orchestrator |
2026-04-06 05:11:26.667941 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-06 05:11:26.667949 | orchestrator | Monday 06 April 2026 05:11:13 +0000 (0:00:00.129) 0:03:43.706 **********
2026-04-06 05:11:26.667959 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:26.667972 | orchestrator |
2026-04-06 05:11:26.667985 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-06 05:11:26.668018 | orchestrator | Monday 06 April 2026 05:11:14 +0000 (0:00:00.131) 0:03:43.838 **********
2026-04-06 05:11:26.668032 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-04-06 05:11:26.668047 | orchestrator |
2026-04-06 05:11:26.668074 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-06 05:11:26.668090 | orchestrator | Monday 06 April 2026 05:11:14 +0000 (0:00:00.641) 0:03:44.479 **********
2026-04-06 05:11:26.668104 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:11:26.668131 | orchestrator |
2026-04-06 05:11:26.668146 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-06 05:11:26.668159 | orchestrator | Monday 06 April 2026 05:11:15 +0000 (0:00:00.739) 0:03:45.219 **********
2026-04-06 05:11:26.668174 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-06 05:11:26.668187 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-06 05:11:26.668201 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-06 05:11:26.668209 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:26.668217 | orchestrator |
2026-04-06 05:11:26.668225 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-06 05:11:26.668233 | orchestrator | Monday 06 April 2026 05:11:15 +0000 (0:00:00.149) 0:03:45.369 **********
2026-04-06 05:11:26.668240 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:26.668248 | orchestrator |
2026-04-06 05:11:26.668256 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-06 05:11:26.668264 | orchestrator | Monday 06 April 2026 05:11:15 +0000 (0:00:00.138) 0:03:45.508 **********
2026-04-06 05:11:26.668272 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:26.668280 | orchestrator |
2026-04-06 05:11:26.668288 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-06 05:11:26.668295 | orchestrator | Monday 06 April 2026 05:11:15 +0000 (0:00:00.181) 0:03:45.690 **********
2026-04-06 05:11:26.668304 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:26.668318 | orchestrator |
2026-04-06 05:11:26.668332 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-06 05:11:26.668363 | orchestrator | Monday 06 April 2026 05:11:16 +0000 (0:00:00.152) 0:03:45.842 **********
2026-04-06 05:11:26.668378 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:26.668392 | orchestrator |
2026-04-06 05:11:26.668406 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-06 05:11:26.668419 | orchestrator | Monday 06 April 2026 05:11:16 +0000 (0:00:00.150) 0:03:45.993 **********
2026-04-06 05:11:26.668432 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:26.668446 | orchestrator |
2026-04-06 05:11:26.668459 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-06 05:11:26.668471 | orchestrator | Monday 06 April 2026 05:11:16 +0000 (0:00:00.153) 0:03:46.146 **********
2026-04-06 05:11:26.668478 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:11:26.668486 | orchestrator |
2026-04-06 05:11:26.668494 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-06 05:11:26.668502 | orchestrator | Monday 06 April 2026 05:11:18 +0000 (0:00:01.950) 0:03:48.097 **********
2026-04-06 05:11:26.668509 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:11:26.668521 | orchestrator |
2026-04-06 05:11:26.668534 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-06 05:11:26.668546 | orchestrator | Monday 06 April 2026 05:11:18 +0000 (0:00:00.144) 0:03:48.241 **********
2026-04-06 05:11:26.668567 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-04-06 05:11:26.668580 | orchestrator |
2026-04-06 05:11:26.668594 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-06 05:11:26.668606 | orchestrator | Monday 06 April 2026 05:11:19 +0000 (0:00:00.601) 0:03:48.843 **********
2026-04-06 05:11:26.668620 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:26.668634 | orchestrator |
2026-04-06 05:11:26.668647 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-06 05:11:26.668660 | orchestrator | Monday 06 April 2026 05:11:19 +0000 (0:00:00.153) 0:03:48.997 **********
2026-04-06 05:11:26.668674 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:26.668687 | orchestrator |
2026-04-06 05:11:26.668701 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-06 05:11:26.668728 | orchestrator | Monday 06 April 2026 05:11:19 +0000 (0:00:00.148) 0:03:49.145 **********
2026-04-06 05:11:26.668745 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:26.668752 | orchestrator |
2026-04-06 05:11:26.668760 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-06 05:11:26.668769 | orchestrator | Monday 06 April 2026 05:11:19 +0000 (0:00:00.149) 0:03:49.295 **********
2026-04-06 05:11:26.668782 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:26.668795 | orchestrator |
2026-04-06 05:11:26.668809 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-06 05:11:26.668823 | orchestrator | Monday 06 April 2026 05:11:19 +0000 (0:00:00.147) 0:03:49.442 **********
2026-04-06 05:11:26.668832 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:26.668840 | orchestrator |
2026-04-06 05:11:26.668848 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-06 05:11:26.668856 | orchestrator | Monday 06 April 2026 05:11:19 +0000 (0:00:00.146) 0:03:49.588 **********
2026-04-06 05:11:26.668864 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:26.668872 | orchestrator |
2026-04-06 05:11:26.668880 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-06 05:11:26.668891 | orchestrator | Monday 06 April 2026 05:11:20 +0000 (0:00:00.163) 0:03:49.752 **********
2026-04-06 05:11:26.668904 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:26.668917 | orchestrator |
2026-04-06 05:11:26.668931 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-06 05:11:26.668944 | orchestrator | Monday 06 April 2026 05:11:20 +0000 (0:00:00.155) 0:03:49.907 **********
2026-04-06 05:11:26.668958 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:26.668971 | orchestrator |
2026-04-06 05:11:26.668984 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-06 05:11:26.669034 | orchestrator | Monday 06 April 2026 05:11:20 +0000 (0:00:00.148) 0:03:50.055 **********
2026-04-06 05:11:26.669051 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:11:26.669065 | orchestrator |
2026-04-06 05:11:26.669078 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-06 05:11:26.669091 | orchestrator | Monday 06 April 2026 05:11:20 +0000 (0:00:00.510) 0:03:50.565 **********
2026-04-06 05:11:26.669105 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-04-06 05:11:26.669119 | orchestrator |
2026-04-06 05:11:26.669132 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-06 05:11:26.669145 | orchestrator | Monday 06 April 2026 05:11:21 +0000 (0:00:00.575) 0:03:51.141 **********
2026-04-06 05:11:26.669158 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-04-06 05:11:26.669170 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-04-06 05:11:26.669178 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-04-06 05:11:26.669186 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-04-06 05:11:26.669193 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-04-06 05:11:26.669201 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-04-06 05:11:26.669211 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-04-06 05:11:26.669224 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-06 05:11:26.669238 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-06 05:11:26.669251 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-06 05:11:26.669264 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-06 05:11:26.669276 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-06 05:11:26.669290 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-06 05:11:26.669298 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-06 05:11:26.669313 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-04-06 05:11:39.374330 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-04-06 05:11:39.374478 | orchestrator |
2026-04-06 05:11:39.374496 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-06 05:11:39.374509 | orchestrator | Monday 06 April 2026 05:11:27 +0000 (0:00:05.687) 0:03:56.828 **********
2026-04-06 05:11:39.374520 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:39.374533 | orchestrator |
2026-04-06 05:11:39.374544 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-06 05:11:39.374554 | orchestrator | Monday 06 April 2026 05:11:27 +0000 (0:00:00.141) 0:03:56.970 **********
2026-04-06 05:11:39.374565 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:39.374576 | orchestrator |
2026-04-06 05:11:39.374586 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-06 05:11:39.374597 | orchestrator | Monday 06 April 2026 05:11:27 +0000 (0:00:00.132) 0:03:57.103 **********
2026-04-06 05:11:39.374608 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:39.374619 | orchestrator |
2026-04-06 05:11:39.374633 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-06 05:11:39.374646 | orchestrator | Monday 06 April 2026 05:11:27 +0000 (0:00:00.130) 0:03:57.233 **********
2026-04-06 05:11:39.374672 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:39.374686 | orchestrator |
2026-04-06 05:11:39.374698 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-06 05:11:39.374710 | orchestrator | Monday 06 April 2026 05:11:27 +0000 (0:00:00.136) 0:03:57.369 **********
2026-04-06 05:11:39.374720 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:11:39.374731 | orchestrator |
2026-04-06 05:11:39.374742 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-06 05:11:39.374753 | orchestrator | Monday 06
April 2026 05:11:27 +0000 (0:00:00.126) 0:03:57.496 ********** 2026-04-06 05:11:39.374763 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:11:39.374774 | orchestrator | 2026-04-06 05:11:39.374785 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-06 05:11:39.374797 | orchestrator | Monday 06 April 2026 05:11:27 +0000 (0:00:00.142) 0:03:57.638 ********** 2026-04-06 05:11:39.374807 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:11:39.374818 | orchestrator | 2026-04-06 05:11:39.374829 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-06 05:11:39.374840 | orchestrator | Monday 06 April 2026 05:11:28 +0000 (0:00:00.134) 0:03:57.773 ********** 2026-04-06 05:11:39.374851 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:11:39.374862 | orchestrator | 2026-04-06 05:11:39.374872 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-06 05:11:39.374883 | orchestrator | Monday 06 April 2026 05:11:28 +0000 (0:00:00.122) 0:03:57.896 ********** 2026-04-06 05:11:39.374894 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:11:39.374905 | orchestrator | 2026-04-06 05:11:39.374916 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-06 05:11:39.374927 | orchestrator | Monday 06 April 2026 05:11:28 +0000 (0:00:00.145) 0:03:58.041 ********** 2026-04-06 05:11:39.374937 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:11:39.374948 | orchestrator | 2026-04-06 05:11:39.374959 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-06 05:11:39.374969 | orchestrator | Monday 06 April 2026 05:11:28 +0000 (0:00:00.112) 0:03:58.154 ********** 2026-04-06 05:11:39.374980 | orchestrator | skipping: 
[testbed-node-0] 2026-04-06 05:11:39.374992 | orchestrator | 2026-04-06 05:11:39.375042 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-06 05:11:39.375053 | orchestrator | Monday 06 April 2026 05:11:28 +0000 (0:00:00.360) 0:03:58.515 ********** 2026-04-06 05:11:39.375064 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:11:39.375075 | orchestrator | 2026-04-06 05:11:39.375086 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-06 05:11:39.375096 | orchestrator | Monday 06 April 2026 05:11:28 +0000 (0:00:00.109) 0:03:58.624 ********** 2026-04-06 05:11:39.375116 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:11:39.375127 | orchestrator | 2026-04-06 05:11:39.375138 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-06 05:11:39.375149 | orchestrator | Monday 06 April 2026 05:11:29 +0000 (0:00:00.207) 0:03:58.832 ********** 2026-04-06 05:11:39.375159 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:11:39.375170 | orchestrator | 2026-04-06 05:11:39.375181 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-06 05:11:39.375192 | orchestrator | Monday 06 April 2026 05:11:29 +0000 (0:00:00.125) 0:03:58.957 ********** 2026-04-06 05:11:39.375202 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:11:39.375213 | orchestrator | 2026-04-06 05:11:39.375223 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-06 05:11:39.375234 | orchestrator | Monday 06 April 2026 05:11:29 +0000 (0:00:00.207) 0:03:59.165 ********** 2026-04-06 05:11:39.375245 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:11:39.375256 | orchestrator | 2026-04-06 05:11:39.375266 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-06 05:11:39.375277 | 
orchestrator | Monday 06 April 2026 05:11:29 +0000 (0:00:00.106) 0:03:59.271 ********** 2026-04-06 05:11:39.375288 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:11:39.375298 | orchestrator | 2026-04-06 05:11:39.375309 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-06 05:11:39.375322 | orchestrator | Monday 06 April 2026 05:11:29 +0000 (0:00:00.096) 0:03:59.367 ********** 2026-04-06 05:11:39.375332 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:11:39.375343 | orchestrator | 2026-04-06 05:11:39.375354 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-06 05:11:39.375364 | orchestrator | Monday 06 April 2026 05:11:29 +0000 (0:00:00.128) 0:03:59.496 ********** 2026-04-06 05:11:39.375375 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:11:39.375386 | orchestrator | 2026-04-06 05:11:39.375414 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-06 05:11:39.375425 | orchestrator | Monday 06 April 2026 05:11:29 +0000 (0:00:00.103) 0:03:59.600 ********** 2026-04-06 05:11:39.375436 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:11:39.375447 | orchestrator | 2026-04-06 05:11:39.375458 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-06 05:11:39.375468 | orchestrator | Monday 06 April 2026 05:11:30 +0000 (0:00:00.121) 0:03:59.722 ********** 2026-04-06 05:11:39.375479 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:11:39.375490 | orchestrator | 2026-04-06 05:11:39.375501 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-06 05:11:39.375512 | orchestrator | Monday 06 April 2026 05:11:30 +0000 (0:00:00.119) 0:03:59.841 ********** 2026-04-06 05:11:39.375522 | orchestrator | skipping: [testbed-node-0] 
=> (item=testbed-node-3)  2026-04-06 05:11:39.375533 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-06 05:11:39.375544 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-06 05:11:39.375555 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:11:39.375565 | orchestrator | 2026-04-06 05:11:39.375576 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-06 05:11:39.375593 | orchestrator | Monday 06 April 2026 05:11:30 +0000 (0:00:00.572) 0:04:00.414 ********** 2026-04-06 05:11:39.375604 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-06 05:11:39.375615 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-06 05:11:39.375626 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-06 05:11:39.375636 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:11:39.375647 | orchestrator | 2026-04-06 05:11:39.375658 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-06 05:11:39.375669 | orchestrator | Monday 06 April 2026 05:11:31 +0000 (0:00:00.575) 0:04:00.990 ********** 2026-04-06 05:11:39.375686 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-06 05:11:39.375697 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-06 05:11:39.375708 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-06 05:11:39.375719 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:11:39.375729 | orchestrator | 2026-04-06 05:11:39.375740 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-06 05:11:39.375751 | orchestrator | Monday 06 April 2026 05:11:32 +0000 (0:00:00.737) 0:04:01.727 ********** 2026-04-06 05:11:39.375762 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:11:39.375773 | orchestrator | 2026-04-06 
05:11:39.375783 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-06 05:11:39.375794 | orchestrator | Monday 06 April 2026 05:11:32 +0000 (0:00:00.119) 0:04:01.847 ********** 2026-04-06 05:11:39.375805 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-04-06 05:11:39.375816 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:11:39.375826 | orchestrator | 2026-04-06 05:11:39.375837 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-06 05:11:39.375848 | orchestrator | Monday 06 April 2026 05:11:32 +0000 (0:00:00.489) 0:04:02.337 ********** 2026-04-06 05:11:39.375859 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:11:39.375869 | orchestrator | 2026-04-06 05:11:39.375880 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-04-06 05:11:39.375891 | orchestrator | Monday 06 April 2026 05:11:33 +0000 (0:00:00.833) 0:04:03.170 ********** 2026-04-06 05:11:39.375902 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:11:39.375913 | orchestrator | 2026-04-06 05:11:39.375923 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-04-06 05:11:39.375934 | orchestrator | Monday 06 April 2026 05:11:33 +0000 (0:00:00.150) 0:04:03.321 ********** 2026-04-06 05:11:39.375945 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0 2026-04-06 05:11:39.375956 | orchestrator | 2026-04-06 05:11:39.375967 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-04-06 05:11:39.375978 | orchestrator | Monday 06 April 2026 05:11:34 +0000 (0:00:00.638) 0:04:03.959 ********** 2026-04-06 05:11:39.375989 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-04-06 05:11:39.376023 | orchestrator | 2026-04-06 05:11:39.376034 | orchestrator | TASK [ceph-mon : Generate 
monitor initial keyring] ***************************** 2026-04-06 05:11:39.376045 | orchestrator | Monday 06 April 2026 05:11:36 +0000 (0:00:02.061) 0:04:06.021 ********** 2026-04-06 05:11:39.376056 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:11:39.376067 | orchestrator | 2026-04-06 05:11:39.376078 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-04-06 05:11:39.376088 | orchestrator | Monday 06 April 2026 05:11:36 +0000 (0:00:00.172) 0:04:06.193 ********** 2026-04-06 05:11:39.376099 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:11:39.376110 | orchestrator | 2026-04-06 05:11:39.376121 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-04-06 05:11:39.376131 | orchestrator | Monday 06 April 2026 05:11:36 +0000 (0:00:00.165) 0:04:06.359 ********** 2026-04-06 05:11:39.376142 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:11:39.376153 | orchestrator | 2026-04-06 05:11:39.376164 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-04-06 05:11:39.376175 | orchestrator | Monday 06 April 2026 05:11:36 +0000 (0:00:00.173) 0:04:06.533 ********** 2026-04-06 05:11:39.376186 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:11:39.376197 | orchestrator | 2026-04-06 05:11:39.376207 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-04-06 05:11:39.376218 | orchestrator | Monday 06 April 2026 05:11:38 +0000 (0:00:01.399) 0:04:07.932 ********** 2026-04-06 05:11:39.376229 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:11:39.376240 | orchestrator | 2026-04-06 05:11:39.376258 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-04-06 05:11:39.376269 | orchestrator | Monday 06 April 2026 05:11:38 +0000 (0:00:00.607) 0:04:08.540 ********** 2026-04-06 05:11:39.376280 | orchestrator | ok: 
[testbed-node-0] 2026-04-06 05:11:39.376290 | orchestrator | 2026-04-06 05:11:39.376307 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-04-06 05:12:11.121194 | orchestrator | Monday 06 April 2026 05:11:39 +0000 (0:00:00.550) 0:04:09.090 ********** 2026-04-06 05:12:11.121308 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:12:11.121326 | orchestrator | 2026-04-06 05:12:11.121338 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-04-06 05:12:11.121350 | orchestrator | Monday 06 April 2026 05:11:39 +0000 (0:00:00.494) 0:04:09.585 ********** 2026-04-06 05:12:11.121361 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:12:11.121372 | orchestrator | 2026-04-06 05:12:11.121383 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-04-06 05:12:11.121394 | orchestrator | Monday 06 April 2026 05:11:40 +0000 (0:00:00.741) 0:04:10.327 ********** 2026-04-06 05:12:11.121405 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:12:11.121416 | orchestrator | 2026-04-06 05:12:11.121427 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-04-06 05:12:11.121438 | orchestrator | Monday 06 April 2026 05:11:41 +0000 (0:00:00.684) 0:04:11.012 ********** 2026-04-06 05:12:11.121450 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-06 05:12:11.121462 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-06 05:12:11.121489 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-06 05:12:11.121500 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-04-06 05:12:11.121511 | orchestrator | 2026-04-06 05:12:11.121522 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-04-06 05:12:11.121533 | orchestrator | Monday 06 April 2026 05:11:44 +0000 
(0:00:02.798) 0:04:13.810 ********** 2026-04-06 05:12:11.121544 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:12:11.121555 | orchestrator | 2026-04-06 05:12:11.121566 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-04-06 05:12:11.121577 | orchestrator | Monday 06 April 2026 05:11:45 +0000 (0:00:01.091) 0:04:14.902 ********** 2026-04-06 05:12:11.121589 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:12:11.121600 | orchestrator | 2026-04-06 05:12:11.121611 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-04-06 05:12:11.121622 | orchestrator | Monday 06 April 2026 05:11:45 +0000 (0:00:00.155) 0:04:15.057 ********** 2026-04-06 05:12:11.121633 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:12:11.121643 | orchestrator | 2026-04-06 05:12:11.121654 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-04-06 05:12:11.121665 | orchestrator | Monday 06 April 2026 05:11:45 +0000 (0:00:00.145) 0:04:15.202 ********** 2026-04-06 05:12:11.121676 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:12:11.121687 | orchestrator | 2026-04-06 05:12:11.121698 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-04-06 05:12:11.121709 | orchestrator | Monday 06 April 2026 05:11:46 +0000 (0:00:00.744) 0:04:15.947 ********** 2026-04-06 05:12:11.121720 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:12:11.121733 | orchestrator | 2026-04-06 05:12:11.121746 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-04-06 05:12:11.121759 | orchestrator | Monday 06 April 2026 05:11:46 +0000 (0:00:00.525) 0:04:16.473 ********** 2026-04-06 05:12:11.121772 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:12:11.121786 | orchestrator | 2026-04-06 05:12:11.121799 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] 
************************************ 2026-04-06 05:12:11.121812 | orchestrator | Monday 06 April 2026 05:11:46 +0000 (0:00:00.130) 0:04:16.603 ********** 2026-04-06 05:12:11.121825 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-04-06 05:12:11.121840 | orchestrator | 2026-04-06 05:12:11.121875 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-04-06 05:12:11.121887 | orchestrator | Monday 06 April 2026 05:11:47 +0000 (0:00:00.831) 0:04:17.434 ********** 2026-04-06 05:12:11.121897 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:12:11.121908 | orchestrator | 2026-04-06 05:12:11.121919 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-04-06 05:12:11.121929 | orchestrator | Monday 06 April 2026 05:11:47 +0000 (0:00:00.107) 0:04:17.542 ********** 2026-04-06 05:12:11.121940 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:12:11.121951 | orchestrator | 2026-04-06 05:12:11.121961 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-04-06 05:12:11.121972 | orchestrator | Monday 06 April 2026 05:11:47 +0000 (0:00:00.116) 0:04:17.659 ********** 2026-04-06 05:12:11.121983 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-04-06 05:12:11.121994 | orchestrator | 2026-04-06 05:12:11.122083 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-04-06 05:12:11.122095 | orchestrator | Monday 06 April 2026 05:11:48 +0000 (0:00:00.567) 0:04:18.226 ********** 2026-04-06 05:12:11.122106 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:12:11.122117 | orchestrator | 2026-04-06 05:12:11.122128 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-04-06 05:12:11.122139 | orchestrator | Monday 06 April 2026 05:11:49 +0000 
(0:00:01.295) 0:04:19.522 ********** 2026-04-06 05:12:11.122150 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:12:11.122161 | orchestrator | 2026-04-06 05:12:11.122172 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-04-06 05:12:11.122183 | orchestrator | Monday 06 April 2026 05:11:50 +0000 (0:00:01.001) 0:04:20.523 ********** 2026-04-06 05:12:11.122194 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:12:11.122205 | orchestrator | 2026-04-06 05:12:11.122216 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-04-06 05:12:11.122227 | orchestrator | Monday 06 April 2026 05:11:52 +0000 (0:00:01.473) 0:04:21.997 ********** 2026-04-06 05:12:11.122237 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:12:11.122249 | orchestrator | 2026-04-06 05:12:11.122259 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-04-06 05:12:11.122270 | orchestrator | Monday 06 April 2026 05:11:54 +0000 (0:00:02.365) 0:04:24.363 ********** 2026-04-06 05:12:11.122281 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-04-06 05:12:11.122292 | orchestrator | 2026-04-06 05:12:11.122319 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-04-06 05:12:11.122331 | orchestrator | Monday 06 April 2026 05:11:55 +0000 (0:00:00.581) 0:04:24.944 ********** 2026-04-06 05:12:11.122342 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:12:11.122353 | orchestrator | 2026-04-06 05:12:11.122363 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-06 05:12:11.122375 | orchestrator | Monday 06 April 2026 05:11:56 +0000 (0:00:01.214) 0:04:26.158 ********** 2026-04-06 05:12:11.122385 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:12:11.122396 | orchestrator | 2026-04-06 05:12:11.122407 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-06 05:12:11.122418 | orchestrator | Monday 06 April 2026 05:11:58 +0000 (0:00:02.317) 0:04:28.476 ********** 2026-04-06 05:12:11.122429 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:12:11.122440 | orchestrator | 2026-04-06 05:12:11.122451 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-06 05:12:11.122462 | orchestrator | Monday 06 April 2026 05:11:58 +0000 (0:00:00.122) 0:04:28.598 ********** 2026-04-06 05:12:11.122482 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-06 05:12:11.122507 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-04-06 05:12:11.122518 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-06 05:12:11.122529 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-06 05:12:11.122542 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-06 05:12:11.122554 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}])  2026-04-06 05:12:11.122568 | orchestrator | 2026-04-06 05:12:11.122579 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-04-06 05:12:11.122590 | orchestrator | Monday 06 April 2026 05:12:07 +0000 (0:00:09.041) 0:04:37.640 ********** 
2026-04-06 05:12:11.122601 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:12:11.122611 | orchestrator | 2026-04-06 05:12:11.122622 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-06 05:12:11.122633 | orchestrator | Monday 06 April 2026 05:12:09 +0000 (0:00:01.519) 0:04:39.159 ********** 2026-04-06 05:12:11.122644 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-06 05:12:11.122655 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-06 05:12:11.122666 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-06 05:12:11.122677 | orchestrator | 2026-04-06 05:12:11.122688 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-06 05:12:11.122699 | orchestrator | Monday 06 April 2026 05:12:10 +0000 (0:00:01.157) 0:04:40.317 ********** 2026-04-06 05:12:11.122709 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-06 05:12:11.122720 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-06 05:12:11.122731 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-06 05:12:11.122742 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:12:11.122753 | orchestrator | 2026-04-06 05:12:11.122763 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-04-06 05:12:11.122780 | orchestrator | Monday 06 April 2026 05:12:11 +0000 (0:00:00.515) 0:04:40.832 ********** 2026-04-06 05:12:22.439654 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:12:22.439791 | orchestrator | 2026-04-06 05:12:22.439808 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-04-06 05:12:22.439820 | orchestrator | Monday 06 April 2026 05:12:11 +0000 (0:00:00.136) 0:04:40.968 **********
2026-04-06 05:12:22.439851 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:12:22.439862 | orchestrator |
2026-04-06 05:12:22.439872 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-06 05:12:22.439882 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-04-06 05:12:22.439891 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-04-06 05:12:22.439910 | orchestrator | Monday 06 April 2026 05:12:12 +0000 (0:00:01.420) 0:04:42.389 **********
2026-04-06 05:12:22.439920 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:12:22.439929 | orchestrator |
2026-04-06 05:12:22.439939 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-06 05:12:22.439963 | orchestrator | Monday 06 April 2026 05:12:12 +0000 (0:00:00.131) 0:04:42.520 **********
2026-04-06 05:12:22.439973 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:12:22.439982 | orchestrator |
2026-04-06 05:12:22.439992 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-06 05:12:22.440064 | orchestrator | Monday 06 April 2026 05:12:12 +0000 (0:00:00.139) 0:04:42.660 **********
2026-04-06 05:12:22.440074 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:12:22.440083 | orchestrator |
2026-04-06 05:12:22.440093 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-06 05:12:22.440104 | orchestrator | Monday 06 April 2026 05:12:13 +0000 (0:00:00.445) 0:04:43.105 **********
2026-04-06 05:12:22.440113 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:12:22.440123 | orchestrator |
2026-04-06 05:12:22.440133 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-06 05:12:22.440142 | orchestrator | Monday 06 April 2026 05:12:13 +0000 (0:00:00.129) 0:04:43.235 **********
2026-04-06 05:12:22.440152 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:12:22.440162 | orchestrator |
2026-04-06 05:12:22.440171 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-06 05:12:22.440181 | orchestrator | Monday 06 April 2026 05:12:13 +0000 (0:00:00.136) 0:04:43.371 **********
2026-04-06 05:12:22.440191 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:12:22.440202 | orchestrator |
2026-04-06 05:12:22.440213 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-06 05:12:22.440224 | orchestrator | Monday 06 April 2026 05:12:13 +0000 (0:00:00.134) 0:04:43.505 **********
2026-04-06 05:12:22.440236 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:12:22.440247 | orchestrator |
2026-04-06 05:12:22.440258 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-04-06 05:12:22.440270 | orchestrator |
2026-04-06 05:12:22.440281 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-04-06 05:12:22.440292 | orchestrator | Monday 06 April 2026 05:12:14 +0000 (0:00:00.736) 0:04:44.242 **********
2026-04-06 05:12:22.440303 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:12:22.440315 | orchestrator |
2026-04-06 05:12:22.440326 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-04-06 05:12:22.440338 | orchestrator | Monday 06 April 2026 05:12:14 +0000 (0:00:00.464) 0:04:44.707 **********
2026-04-06 05:12:22.440349 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:12:22.440360 | orchestrator |
2026-04-06 05:12:22.440371 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-04-06 05:12:22.440383 | orchestrator | Monday 06 April 2026 05:12:15 +0000 (0:00:00.152) 0:04:44.860 **********
2026-04-06 05:12:22.440394 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:12:22.440406 | orchestrator |
2026-04-06 05:12:22.440417 | orchestrator | TASK [Select a running monitor] ************************************************
2026-04-06 05:12:22.440428 | orchestrator | Monday 06 April 2026 05:12:15 +0000 (0:00:00.141) 0:04:45.001 **********
2026-04-06 05:12:22.440439 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:12:22.440450 | orchestrator |
2026-04-06 05:12:22.440469 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-06 05:12:22.440482 | orchestrator | Monday 06 April 2026 05:12:15 +0000 (0:00:00.164) 0:04:45.166 **********
2026-04-06 05:12:22.440494 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1
2026-04-06 05:12:22.440504 | orchestrator |
2026-04-06 05:12:22.440516 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-06 05:12:22.440527 | orchestrator | Monday 06 April 2026 05:12:15 +0000 (0:00:00.258) 0:04:45.424 **********
2026-04-06 05:12:22.440538 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:12:22.440550 | orchestrator |
2026-04-06 05:12:22.440562 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-06 05:12:22.440573 | orchestrator | Monday 06 April 2026 05:12:16 +0000 (0:00:00.472) 0:04:45.897 **********
2026-04-06 05:12:22.440583 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:12:22.440593 | orchestrator |
2026-04-06 05:12:22.440602 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-06 05:12:22.440612 | orchestrator | Monday 06 April 2026 05:12:16 +0000 (0:00:00.424) 0:04:46.322 **********
2026-04-06 05:12:22.440622 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:12:22.440632 | orchestrator |
2026-04-06 05:12:22.440641 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-06 05:12:22.440651 | orchestrator | Monday 06 April 2026 05:12:17 +0000 (0:00:00.557) 0:04:46.879 **********
2026-04-06 05:12:22.440660 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:12:22.440670 | orchestrator |
2026-04-06 05:12:22.440680 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-06 05:12:22.440690 | orchestrator | Monday 06 April 2026 05:12:17 +0000 (0:00:00.138) 0:04:47.017 **********
2026-04-06 05:12:22.440699 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:12:22.440709 | orchestrator |
2026-04-06 05:12:22.440719 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-06 05:12:22.440743 | orchestrator | Monday 06 April 2026 05:12:17 +0000 (0:00:00.158) 0:04:47.177 **********
2026-04-06 05:12:22.440753 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:12:22.440765 | orchestrator |
2026-04-06 05:12:22.440782 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-06 05:12:22.440799 | orchestrator | Monday 06 April 2026 05:12:17 +0000 (0:00:00.170) 0:04:47.347 **********
2026-04-06 05:12:22.440814 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:12:22.440829 | orchestrator |
2026-04-06 05:12:22.440845 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-06 05:12:22.440860 | orchestrator | Monday 06 April 2026 05:12:17 +0000 (0:00:00.143) 0:04:47.490 **********
2026-04-06 05:12:22.440875 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:12:22.440889 | orchestrator |
2026-04-06 05:12:22.440906 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-06 05:12:22.440920 | orchestrator | Monday 06 April 2026 05:12:17 +0000 (0:00:00.141) 0:04:47.631 **********
2026-04-06 05:12:22.440937 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-06 05:12:22.440952 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-06 05:12:22.440978 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 05:12:22.441017 | orchestrator |
2026-04-06 05:12:22.441029 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-06 05:12:22.441039 | orchestrator | Monday 06 April 2026 05:12:18 +0000 (0:00:00.685) 0:04:48.317 **********
2026-04-06 05:12:22.441048 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:12:22.441058 | orchestrator |
2026-04-06 05:12:22.441067 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-06 05:12:22.441077 | orchestrator | Monday 06 April 2026 05:12:18 +0000 (0:00:00.253) 0:04:48.571 **********
2026-04-06 05:12:22.441086 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-06 05:12:22.441096 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-06 05:12:22.441114 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 05:12:22.441123 | orchestrator |
2026-04-06 05:12:22.441133 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-06 05:12:22.441143 | orchestrator | Monday 06 April 2026 05:12:21 +0000 (0:00:02.159) 0:04:50.730 **********
2026-04-06 05:12:22.441152 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-06 05:12:22.441162 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-06 05:12:22.441172 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-06 05:12:22.441181 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:12:22.441191 | orchestrator |
2026-04-06 05:12:22.441200 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-06 05:12:22.441210 | orchestrator | Monday 06 April 2026 05:12:21 +0000 (0:00:00.416) 0:04:51.147 **********
2026-04-06 05:12:22.441221 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-06 05:12:22.441234 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-06 05:12:22.441244 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-06 05:12:22.441254 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:12:22.441263 | orchestrator |
2026-04-06 05:12:22.441273 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-06 05:12:22.441283 | orchestrator | Monday 06 April 2026 05:12:22 +0000 (0:00:00.919) 0:04:52.066 **********
2026-04-06 05:12:22.441295 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-06 05:12:22.441309 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-06 05:12:22.441329 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-06 05:12:26.571586 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:12:26.571708 | orchestrator |
2026-04-06 05:12:26.571733 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-06 05:12:26.571754 | orchestrator | Monday 06 April 2026 05:12:22 +0000 (0:00:00.184) 0:04:52.251 **********
2026-04-06 05:12:26.571799 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '06ed7bf51830', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-06 05:12:19.366753', 'end': '2026-04-06 05:12:19.419372', 'delta': '0:00:00.052619', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['06ed7bf51830'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-06 05:12:26.571856 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '46d5ea15fe96', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-06 05:12:19.944160', 'end': '2026-04-06 05:12:19.996962', 'delta': '0:00:00.052802', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['46d5ea15fe96'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-06 05:12:26.571880 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'a87eea657fd7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-06 05:12:20.844108', 'end': '2026-04-06 05:12:20.881050', 'delta': '0:00:00.036942', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a87eea657fd7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-06 05:12:26.571900 | orchestrator |
2026-04-06 05:12:26.571920 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-06 05:12:26.571940 | orchestrator | Monday 06 April 2026 05:12:23 +0000 (0:00:00.510) 0:04:52.761 **********
2026-04-06 05:12:26.571960 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:12:26.571980 | orchestrator |
2026-04-06 05:12:26.572026 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-06 05:12:26.572046 | orchestrator | Monday 06 April 2026 05:12:23 +0000 (0:00:00.261) 0:04:53.023 **********
2026-04-06 05:12:26.572065 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:12:26.572083 | orchestrator |
2026-04-06 05:12:26.572102 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-06 05:12:26.572122 | orchestrator | Monday 06 April 2026 05:12:23 +0000 (0:00:00.252) 0:04:53.275 **********
2026-04-06 05:12:26.572140 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:12:26.572158 | orchestrator |
2026-04-06 05:12:26.572176 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-06 05:12:26.572196 | orchestrator | Monday 06 April 2026 05:12:23 +0000 (0:00:00.156) 0:04:53.432 **********
2026-04-06 05:12:26.572215 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-04-06 05:12:26.572233 | orchestrator |
2026-04-06 05:12:26.572252 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-06 05:12:26.572270 | orchestrator | Monday 06 April 2026 05:12:24 +0000 (0:00:00.945) 0:04:54.378 **********
2026-04-06 05:12:26.572290 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:12:26.572308 | orchestrator |
2026-04-06 05:12:26.572327 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-06 05:12:26.572345 | orchestrator | Monday 06 April 2026 05:12:24 +0000 (0:00:00.167) 0:04:54.546 **********
2026-04-06 05:12:26.572364 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:12:26.572381 | orchestrator |
2026-04-06 05:12:26.572393 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-06 05:12:26.572416 | orchestrator | Monday 06 April 2026 05:12:24 +0000 (0:00:00.130) 0:04:54.677 **********
2026-04-06 05:12:26.572427 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:12:26.572438 | orchestrator |
2026-04-06 05:12:26.572449 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-06 05:12:26.572460 | orchestrator | Monday 06 April 2026 05:12:25 +0000 (0:00:00.222) 0:04:54.899 **********
2026-04-06 05:12:26.572471 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:12:26.572482 | orchestrator |
2026-04-06 05:12:26.572513 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-06 05:12:26.572524 | orchestrator | Monday 06 April 2026 05:12:25 +0000 (0:00:00.141) 0:04:55.041 **********
2026-04-06 05:12:26.572535 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:12:26.572546 | orchestrator |
2026-04-06 05:12:26.572557 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-06 05:12:26.572568 | orchestrator | Monday 06 April 2026 05:12:25 +0000 (0:00:00.125) 0:04:55.167 **********
2026-04-06 05:12:26.572578 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:12:26.572589 | orchestrator |
2026-04-06 05:12:26.572600 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-06 05:12:26.572611 | orchestrator | Monday 06 April 2026 05:12:25 +0000 (0:00:00.139) 0:04:55.307 **********
2026-04-06 05:12:26.572622 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:12:26.572633 | orchestrator |
2026-04-06 05:12:26.572643 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-06 05:12:26.572662 | orchestrator | Monday 06 April 2026 05:12:25 +0000 (0:00:00.143) 0:04:55.450 **********
2026-04-06 05:12:26.572673 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:12:26.572684 | orchestrator |
2026-04-06 05:12:26.572695 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-06 05:12:26.572706 | orchestrator | Monday 06 April 2026 05:12:25 +0000 (0:00:00.145) 0:04:55.596 **********
2026-04-06 05:12:26.572717 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:12:26.572727 | orchestrator |
2026-04-06 05:12:26.572738 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-06 05:12:26.572750 | orchestrator | Monday 06 April 2026 05:12:26 +0000 (0:00:00.121) 0:04:55.718 **********
2026-04-06 05:12:26.572761 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:12:26.572771 | orchestrator |
2026-04-06 05:12:26.572782 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-06 05:12:26.572793 | orchestrator | Monday 06 April 2026 05:12:26 +0000 (0:00:00.419) 0:04:56.137 **********
2026-04-06 05:12:26.572805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:12:26.572820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:12:26.572831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:12:26.572844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-48-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-06 05:12:26.572863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:12:26.572875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:12:26.572894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:12:26.836763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a48c2299', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-06 05:12:26.836929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:12:26.836958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:12:26.836976 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:12:26.837048 | orchestrator |
2026-04-06 05:12:26.837062 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-06 05:12:26.837073 | orchestrator | Monday 06 April 2026 05:12:26 +0000 (0:00:00.294) 0:04:56.431 **********
2026-04-06 05:12:26.837086 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:12:26.837125 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:12:26.837137 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:12:26.837148 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-48-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:12:26.837160 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:12:26.837179 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:12:26.837190 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:12:26.837215 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a48c2299', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:12:40.412167 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:12:40.412278 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:12:40.412294 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:12:40.412306 | orchestrator |
2026-04-06 05:12:40.412318 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-06 05:12:40.412329 | orchestrator | Monday 06 April 2026 05:12:26 +0000 (0:00:00.232) 0:04:56.664 **********
2026-04-06 05:12:40.412339 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:12:40.412350 | orchestrator |
2026-04-06 05:12:40.412360 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-06 05:12:40.412369 | orchestrator | Monday 06 April 2026 05:12:27 +0000 (0:00:00.527) 0:04:57.192 **********
2026-04-06 05:12:40.412379 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:12:40.412389 | orchestrator |
2026-04-06 05:12:40.412399 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-06 05:12:40.412408 | orchestrator | Monday 06 April 2026 05:12:27 +0000 (0:00:00.122) 0:04:57.314 **********
2026-04-06 05:12:40.412418 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:12:40.412428 | orchestrator |
2026-04-06 05:12:40.412438 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-06 05:12:40.412448 | orchestrator | Monday 06 April 2026 05:12:28 +0000 (0:00:00.488) 0:04:57.802 **********
2026-04-06 05:12:40.412457 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:12:40.412467 | orchestrator |
2026-04-06 05:12:40.412477 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-06 05:12:40.412487 | orchestrator | Monday 06 April 2026 05:12:28 +0000 (0:00:00.125) 0:04:57.928 **********
2026-04-06 05:12:40.412496 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:12:40.412506 | orchestrator |
2026-04-06 05:12:40.412516 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-06 05:12:40.412526 | orchestrator | Monday 06 April 2026 05:12:28 +0000 (0:00:00.254) 0:04:58.182 **********
2026-04-06 05:12:40.412535 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:12:40.412545 | orchestrator |
2026-04-06 05:12:40.412555 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-06 05:12:40.412580 | orchestrator | Monday 06 April 2026 05:12:28 +0000 (0:00:00.154) 0:04:58.337 **********
2026-04-06 05:12:40.412591 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-04-06 05:12:40.412601 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-06 05:12:40.412611 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-04-06 05:12:40.412620 | orchestrator |
2026-04-06 05:12:40.412630 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-06 05:12:40.412640 | orchestrator | Monday 06 April 2026 05:12:29 +0000 (0:00:00.999) 0:04:59.337 **********
2026-04-06 05:12:40.412672 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-06 05:12:40.412683 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-06 05:12:40.412692 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-06 05:12:40.412704 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:12:40.412715 | orchestrator |
2026-04-06 05:12:40.412727 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-06 05:12:40.412738 | orchestrator | Monday 06 April 2026 05:12:29 +0000 (0:00:00.156) 0:04:59.494 **********
2026-04-06 05:12:40.412749 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:12:40.412760 | orchestrator |
2026-04-06 05:12:40.412771 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-06 05:12:40.412782 | orchestrator | Monday 06 April 2026 05:12:29 +0000 (0:00:00.142) 0:04:59.636 **********
2026-04-06 05:12:40.412794 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-06 05:12:40.412806 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-06 05:12:40.412817 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 05:12:40.412828 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-06 05:12:40.412840 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-06 05:12:40.412851 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-06 05:12:40.412877 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-06 05:12:40.412889 | orchestrator |
2026-04-06 05:12:40.412900 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-06 05:12:40.412911 | orchestrator | Monday 06 April 2026 05:12:31 +0000 (0:00:01.142) 0:05:00.779 **********
2026-04-06 05:12:40.412922 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-06 05:12:40.412934 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-06 05:12:40.412945 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 05:12:40.412957 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-06 05:12:40.412969 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-06 05:12:40.412980 | orchestrator |
ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-06 05:12:40.413013 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-06 05:12:40.413026 | orchestrator | 2026-04-06 05:12:40.413038 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-04-06 05:12:40.413049 | orchestrator | Monday 06 April 2026 05:12:33 +0000 (0:00:02.128) 0:05:02.907 ********** 2026-04-06 05:12:40.413060 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:40.413070 | orchestrator | 2026-04-06 05:12:40.413079 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-04-06 05:12:40.413089 | orchestrator | Monday 06 April 2026 05:12:33 +0000 (0:00:00.240) 0:05:03.148 ********** 2026-04-06 05:12:40.413099 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:40.413108 | orchestrator | 2026-04-06 05:12:40.413118 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-04-06 05:12:40.413128 | orchestrator | Monday 06 April 2026 05:12:33 +0000 (0:00:00.241) 0:05:03.390 ********** 2026-04-06 05:12:40.413138 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:40.413147 | orchestrator | 2026-04-06 05:12:40.413157 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-04-06 05:12:40.413167 | orchestrator | Monday 06 April 2026 05:12:33 +0000 (0:00:00.135) 0:05:03.525 ********** 2026-04-06 05:12:40.413176 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:40.413186 | orchestrator | 2026-04-06 05:12:40.413196 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-04-06 05:12:40.413213 | orchestrator | Monday 06 April 2026 05:12:34 +0000 (0:00:00.243) 0:05:03.768 ********** 2026-04-06 05:12:40.413223 | orchestrator | skipping: [testbed-node-1] 2026-04-06 
05:12:40.413233 | orchestrator | 2026-04-06 05:12:40.413242 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-04-06 05:12:40.413252 | orchestrator | Monday 06 April 2026 05:12:34 +0000 (0:00:00.165) 0:05:03.934 ********** 2026-04-06 05:12:40.413262 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-06 05:12:40.413272 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-06 05:12:40.413281 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-06 05:12:40.413291 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:40.413300 | orchestrator | 2026-04-06 05:12:40.413310 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-04-06 05:12:40.413320 | orchestrator | Monday 06 April 2026 05:12:34 +0000 (0:00:00.414) 0:05:04.349 ********** 2026-04-06 05:12:40.413330 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-04-06 05:12:40.413339 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-04-06 05:12:40.413354 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-04-06 05:12:40.413364 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-04-06 05:12:40.413374 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-04-06 05:12:40.413384 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-04-06 05:12:40.413393 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:40.413403 | orchestrator | 2026-04-06 05:12:40.413413 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-04-06 05:12:40.413422 | orchestrator | Monday 06 April 2026 05:12:35 +0000 (0:00:01.026) 0:05:05.376 
********** 2026-04-06 05:12:40.413432 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1) 2026-04-06 05:12:40.413442 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-06 05:12:40.413451 | orchestrator | 2026-04-06 05:12:40.413461 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-04-06 05:12:40.413471 | orchestrator | Monday 06 April 2026 05:12:38 +0000 (0:00:02.547) 0:05:07.923 ********** 2026-04-06 05:12:40.413481 | orchestrator | changed: [testbed-node-1] 2026-04-06 05:12:40.413490 | orchestrator | 2026-04-06 05:12:40.413500 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-06 05:12:40.413510 | orchestrator | Monday 06 April 2026 05:12:39 +0000 (0:00:01.458) 0:05:09.381 ********** 2026-04-06 05:12:40.413520 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-04-06 05:12:40.413530 | orchestrator | 2026-04-06 05:12:40.413540 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-06 05:12:40.413550 | orchestrator | Monday 06 April 2026 05:12:39 +0000 (0:00:00.198) 0:05:09.580 ********** 2026-04-06 05:12:40.413559 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-04-06 05:12:40.413569 | orchestrator | 2026-04-06 05:12:40.413579 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-06 05:12:40.413594 | orchestrator | Monday 06 April 2026 05:12:40 +0000 (0:00:00.540) 0:05:10.121 ********** 2026-04-06 05:12:51.918703 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:12:51.918818 | orchestrator | 2026-04-06 05:12:51.918835 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-06 05:12:51.918849 | orchestrator | Monday 06 April 2026 05:12:40 +0000 
(0:00:00.527) 0:05:10.648 ********** 2026-04-06 05:12:51.918860 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.918872 | orchestrator | 2026-04-06 05:12:51.918883 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-06 05:12:51.918954 | orchestrator | Monday 06 April 2026 05:12:41 +0000 (0:00:00.160) 0:05:10.809 ********** 2026-04-06 05:12:51.918966 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.918977 | orchestrator | 2026-04-06 05:12:51.919054 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-06 05:12:51.919069 | orchestrator | Monday 06 April 2026 05:12:41 +0000 (0:00:00.127) 0:05:10.937 ********** 2026-04-06 05:12:51.919080 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.919091 | orchestrator | 2026-04-06 05:12:51.919102 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-06 05:12:51.919113 | orchestrator | Monday 06 April 2026 05:12:41 +0000 (0:00:00.146) 0:05:11.084 ********** 2026-04-06 05:12:51.919124 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:12:51.919135 | orchestrator | 2026-04-06 05:12:51.919145 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-06 05:12:51.919156 | orchestrator | Monday 06 April 2026 05:12:41 +0000 (0:00:00.539) 0:05:11.624 ********** 2026-04-06 05:12:51.919167 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.919178 | orchestrator | 2026-04-06 05:12:51.919189 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-06 05:12:51.919201 | orchestrator | Monday 06 April 2026 05:12:42 +0000 (0:00:00.159) 0:05:11.783 ********** 2026-04-06 05:12:51.919212 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.919223 | orchestrator | 2026-04-06 05:12:51.919234 | orchestrator | TASK [ceph-handler : 
Check for a ceph-crash container] ************************* 2026-04-06 05:12:51.919248 | orchestrator | Monday 06 April 2026 05:12:42 +0000 (0:00:00.126) 0:05:11.909 ********** 2026-04-06 05:12:51.919260 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:12:51.919273 | orchestrator | 2026-04-06 05:12:51.919286 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-06 05:12:51.919304 | orchestrator | Monday 06 April 2026 05:12:42 +0000 (0:00:00.546) 0:05:12.456 ********** 2026-04-06 05:12:51.919323 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:12:51.919342 | orchestrator | 2026-04-06 05:12:51.919360 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-06 05:12:51.919379 | orchestrator | Monday 06 April 2026 05:12:43 +0000 (0:00:00.532) 0:05:12.988 ********** 2026-04-06 05:12:51.919398 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.919416 | orchestrator | 2026-04-06 05:12:51.919434 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-06 05:12:51.919452 | orchestrator | Monday 06 April 2026 05:12:43 +0000 (0:00:00.144) 0:05:13.133 ********** 2026-04-06 05:12:51.919470 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:12:51.919489 | orchestrator | 2026-04-06 05:12:51.919508 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-06 05:12:51.919528 | orchestrator | Monday 06 April 2026 05:12:43 +0000 (0:00:00.145) 0:05:13.278 ********** 2026-04-06 05:12:51.919548 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.919570 | orchestrator | 2026-04-06 05:12:51.919591 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-06 05:12:51.919610 | orchestrator | Monday 06 April 2026 05:12:43 +0000 (0:00:00.396) 0:05:13.675 ********** 2026-04-06 05:12:51.919629 | orchestrator | skipping: 
[testbed-node-1] 2026-04-06 05:12:51.919648 | orchestrator | 2026-04-06 05:12:51.919667 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-06 05:12:51.919708 | orchestrator | Monday 06 April 2026 05:12:44 +0000 (0:00:00.145) 0:05:13.820 ********** 2026-04-06 05:12:51.919730 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.919743 | orchestrator | 2026-04-06 05:12:51.919754 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-06 05:12:51.919765 | orchestrator | Monday 06 April 2026 05:12:44 +0000 (0:00:00.129) 0:05:13.950 ********** 2026-04-06 05:12:51.919776 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.919787 | orchestrator | 2026-04-06 05:12:51.919798 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-06 05:12:51.919821 | orchestrator | Monday 06 April 2026 05:12:44 +0000 (0:00:00.132) 0:05:14.082 ********** 2026-04-06 05:12:51.919832 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.919843 | orchestrator | 2026-04-06 05:12:51.919853 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-06 05:12:51.919864 | orchestrator | Monday 06 April 2026 05:12:44 +0000 (0:00:00.133) 0:05:14.215 ********** 2026-04-06 05:12:51.919875 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:12:51.919886 | orchestrator | 2026-04-06 05:12:51.919897 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-06 05:12:51.919908 | orchestrator | Monday 06 April 2026 05:12:44 +0000 (0:00:00.167) 0:05:14.383 ********** 2026-04-06 05:12:51.919919 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:12:51.919930 | orchestrator | 2026-04-06 05:12:51.919941 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-06 05:12:51.919952 | 
orchestrator | Monday 06 April 2026 05:12:44 +0000 (0:00:00.180) 0:05:14.563 ********** 2026-04-06 05:12:51.919963 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:12:51.919973 | orchestrator | 2026-04-06 05:12:51.919984 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-06 05:12:51.920019 | orchestrator | Monday 06 April 2026 05:12:45 +0000 (0:00:00.228) 0:05:14.792 ********** 2026-04-06 05:12:51.920030 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.920041 | orchestrator | 2026-04-06 05:12:51.920052 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-06 05:12:51.920063 | orchestrator | Monday 06 April 2026 05:12:45 +0000 (0:00:00.135) 0:05:14.927 ********** 2026-04-06 05:12:51.920073 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.920084 | orchestrator | 2026-04-06 05:12:51.920095 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-06 05:12:51.920126 | orchestrator | Monday 06 April 2026 05:12:45 +0000 (0:00:00.122) 0:05:15.050 ********** 2026-04-06 05:12:51.920138 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.920149 | orchestrator | 2026-04-06 05:12:51.920160 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-06 05:12:51.920170 | orchestrator | Monday 06 April 2026 05:12:45 +0000 (0:00:00.129) 0:05:15.180 ********** 2026-04-06 05:12:51.920181 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.920192 | orchestrator | 2026-04-06 05:12:51.920202 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-06 05:12:51.920213 | orchestrator | Monday 06 April 2026 05:12:45 +0000 (0:00:00.120) 0:05:15.300 ********** 2026-04-06 05:12:51.920223 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.920234 | orchestrator | 2026-04-06 
05:12:51.920245 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-06 05:12:51.920256 | orchestrator | Monday 06 April 2026 05:12:45 +0000 (0:00:00.396) 0:05:15.696 ********** 2026-04-06 05:12:51.920267 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.920277 | orchestrator | 2026-04-06 05:12:51.920288 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-06 05:12:51.920299 | orchestrator | Monday 06 April 2026 05:12:46 +0000 (0:00:00.129) 0:05:15.826 ********** 2026-04-06 05:12:51.920310 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.920321 | orchestrator | 2026-04-06 05:12:51.920331 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-06 05:12:51.920343 | orchestrator | Monday 06 April 2026 05:12:46 +0000 (0:00:00.124) 0:05:15.951 ********** 2026-04-06 05:12:51.920354 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.920364 | orchestrator | 2026-04-06 05:12:51.920375 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-06 05:12:51.920386 | orchestrator | Monday 06 April 2026 05:12:46 +0000 (0:00:00.163) 0:05:16.114 ********** 2026-04-06 05:12:51.920397 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.920408 | orchestrator | 2026-04-06 05:12:51.920419 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-06 05:12:51.920438 | orchestrator | Monday 06 April 2026 05:12:46 +0000 (0:00:00.156) 0:05:16.271 ********** 2026-04-06 05:12:51.920449 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.920460 | orchestrator | 2026-04-06 05:12:51.920471 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-06 05:12:51.920482 | orchestrator | Monday 06 April 2026 05:12:46 +0000 
(0:00:00.137) 0:05:16.408 ********** 2026-04-06 05:12:51.920493 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.920504 | orchestrator | 2026-04-06 05:12:51.920515 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-06 05:12:51.920526 | orchestrator | Monday 06 April 2026 05:12:46 +0000 (0:00:00.133) 0:05:16.542 ********** 2026-04-06 05:12:51.920536 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.920547 | orchestrator | 2026-04-06 05:12:51.920558 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-06 05:12:51.920569 | orchestrator | Monday 06 April 2026 05:12:47 +0000 (0:00:00.195) 0:05:16.737 ********** 2026-04-06 05:12:51.920580 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:12:51.920591 | orchestrator | 2026-04-06 05:12:51.920601 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-06 05:12:51.920612 | orchestrator | Monday 06 April 2026 05:12:48 +0000 (0:00:01.001) 0:05:17.738 ********** 2026-04-06 05:12:51.920623 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:12:51.920634 | orchestrator | 2026-04-06 05:12:51.920645 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-06 05:12:51.920655 | orchestrator | Monday 06 April 2026 05:12:49 +0000 (0:00:01.410) 0:05:19.148 ********** 2026-04-06 05:12:51.920672 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-04-06 05:12:51.920684 | orchestrator | 2026-04-06 05:12:51.920696 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-06 05:12:51.920707 | orchestrator | Monday 06 April 2026 05:12:49 +0000 (0:00:00.209) 0:05:19.358 ********** 2026-04-06 05:12:51.920717 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.920728 | orchestrator | 2026-04-06 
05:12:51.920739 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-06 05:12:51.920750 | orchestrator | Monday 06 April 2026 05:12:49 +0000 (0:00:00.126) 0:05:19.485 ********** 2026-04-06 05:12:51.920760 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.920771 | orchestrator | 2026-04-06 05:12:51.920782 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-06 05:12:51.920793 | orchestrator | Monday 06 April 2026 05:12:50 +0000 (0:00:00.431) 0:05:19.916 ********** 2026-04-06 05:12:51.920804 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-06 05:12:51.920815 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-06 05:12:51.920825 | orchestrator | 2026-04-06 05:12:51.920837 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-06 05:12:51.920847 | orchestrator | Monday 06 April 2026 05:12:51 +0000 (0:00:00.904) 0:05:20.821 ********** 2026-04-06 05:12:51.920858 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:12:51.920869 | orchestrator | 2026-04-06 05:12:51.920880 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-06 05:12:51.920891 | orchestrator | Monday 06 April 2026 05:12:51 +0000 (0:00:00.481) 0:05:21.302 ********** 2026-04-06 05:12:51.920902 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.920913 | orchestrator | 2026-04-06 05:12:51.920923 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-06 05:12:51.920934 | orchestrator | Monday 06 April 2026 05:12:51 +0000 (0:00:00.153) 0:05:21.455 ********** 2026-04-06 05:12:51.920945 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:12:51.920956 | orchestrator | 2026-04-06 05:12:51.920967 | orchestrator | TASK 
[ceph-container-common : Include registry.yml] **************************** 2026-04-06 05:12:51.920984 | orchestrator | Monday 06 April 2026 05:12:51 +0000 (0:00:00.130) 0:05:21.586 ********** 2026-04-06 05:12:51.921020 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:13:05.469391 | orchestrator | 2026-04-06 05:13:05.469513 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-06 05:13:05.469540 | orchestrator | Monday 06 April 2026 05:12:51 +0000 (0:00:00.127) 0:05:21.713 ********** 2026-04-06 05:13:05.469561 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1 2026-04-06 05:13:05.469583 | orchestrator | 2026-04-06 05:13:05.469603 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-06 05:13:05.469639 | orchestrator | Monday 06 April 2026 05:12:52 +0000 (0:00:00.210) 0:05:21.924 ********** 2026-04-06 05:13:05.469651 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:13:05.469664 | orchestrator | 2026-04-06 05:13:05.469675 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-06 05:13:05.469687 | orchestrator | Monday 06 April 2026 05:12:52 +0000 (0:00:00.730) 0:05:22.655 ********** 2026-04-06 05:13:05.469698 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-06 05:13:05.469709 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-06 05:13:05.469720 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-06 05:13:05.469731 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:13:05.469744 | orchestrator | 2026-04-06 05:13:05.469755 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-06 05:13:05.469766 | orchestrator | Monday 06 April 2026 05:12:53 +0000 
(0:00:00.183) 0:05:22.838 ********** 2026-04-06 05:13:05.469777 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:13:05.469788 | orchestrator | 2026-04-06 05:13:05.469799 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-06 05:13:05.469810 | orchestrator | Monday 06 April 2026 05:12:53 +0000 (0:00:00.128) 0:05:22.967 ********** 2026-04-06 05:13:05.469821 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:13:05.469832 | orchestrator | 2026-04-06 05:13:05.469843 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-06 05:13:05.469854 | orchestrator | Monday 06 April 2026 05:12:53 +0000 (0:00:00.179) 0:05:23.147 ********** 2026-04-06 05:13:05.469865 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:13:05.469876 | orchestrator | 2026-04-06 05:13:05.469887 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-06 05:13:05.469899 | orchestrator | Monday 06 April 2026 05:12:53 +0000 (0:00:00.144) 0:05:23.291 ********** 2026-04-06 05:13:05.469910 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:13:05.469921 | orchestrator | 2026-04-06 05:13:05.469935 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-06 05:13:05.469947 | orchestrator | Monday 06 April 2026 05:12:53 +0000 (0:00:00.408) 0:05:23.700 ********** 2026-04-06 05:13:05.469961 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:13:05.469973 | orchestrator | 2026-04-06 05:13:05.470069 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-06 05:13:05.470086 | orchestrator | Monday 06 April 2026 05:12:54 +0000 (0:00:00.156) 0:05:23.857 ********** 2026-04-06 05:13:05.470099 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:13:05.470112 | orchestrator | 2026-04-06 05:13:05.470125 | orchestrator | TASK 
[ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-06 05:13:05.470138 | orchestrator | Monday 06 April 2026 05:12:55 +0000 (0:00:01.485) 0:05:25.343 ********** 2026-04-06 05:13:05.470151 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:13:05.470164 | orchestrator | 2026-04-06 05:13:05.470178 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-06 05:13:05.470191 | orchestrator | Monday 06 April 2026 05:12:55 +0000 (0:00:00.135) 0:05:25.479 ********** 2026-04-06 05:13:05.470220 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1 2026-04-06 05:13:05.470259 | orchestrator | 2026-04-06 05:13:05.470272 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-06 05:13:05.470286 | orchestrator | Monday 06 April 2026 05:12:55 +0000 (0:00:00.214) 0:05:25.694 ********** 2026-04-06 05:13:05.470297 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:13:05.470308 | orchestrator | 2026-04-06 05:13:05.470318 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-06 05:13:05.470329 | orchestrator | Monday 06 April 2026 05:12:56 +0000 (0:00:00.156) 0:05:25.850 ********** 2026-04-06 05:13:05.470340 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:13:05.470351 | orchestrator | 2026-04-06 05:13:05.470361 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-06 05:13:05.470372 | orchestrator | Monday 06 April 2026 05:12:56 +0000 (0:00:00.154) 0:05:26.005 ********** 2026-04-06 05:13:05.470383 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:13:05.470394 | orchestrator | 2026-04-06 05:13:05.470404 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-06 05:13:05.470415 | orchestrator | Monday 06 April 2026 05:12:56 +0000 
(0:00:00.152) 0:05:26.157 **********
2026-04-06 05:13:05.470426 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:05.470437 | orchestrator |
2026-04-06 05:13:05.470447 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-06 05:13:05.470458 | orchestrator | Monday 06 April 2026 05:12:56 +0000 (0:00:00.150) 0:05:26.307 **********
2026-04-06 05:13:05.470469 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:05.470479 | orchestrator |
2026-04-06 05:13:05.470490 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-06 05:13:05.470501 | orchestrator | Monday 06 April 2026 05:12:56 +0000 (0:00:00.145) 0:05:26.453 **********
2026-04-06 05:13:05.470512 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:05.470523 | orchestrator |
2026-04-06 05:13:05.470533 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-06 05:13:05.470545 | orchestrator | Monday 06 April 2026 05:12:56 +0000 (0:00:00.146) 0:05:26.599 **********
2026-04-06 05:13:05.470564 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:05.470582 | orchestrator |
2026-04-06 05:13:05.470601 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-06 05:13:05.470645 | orchestrator | Monday 06 April 2026 05:12:57 +0000 (0:00:00.158) 0:05:26.758 **********
2026-04-06 05:13:05.470665 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:05.470680 | orchestrator |
2026-04-06 05:13:05.470692 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-06 05:13:05.470703 | orchestrator | Monday 06 April 2026 05:12:57 +0000 (0:00:00.445) 0:05:27.203 **********
2026-04-06 05:13:05.470714 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:13:05.470725 | orchestrator |
2026-04-06 05:13:05.470736 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-06 05:13:05.470747 | orchestrator | Monday 06 April 2026 05:12:57 +0000 (0:00:00.221) 0:05:27.425 **********
2026-04-06 05:13:05.470758 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-04-06 05:13:05.470769 | orchestrator |
2026-04-06 05:13:05.470780 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-06 05:13:05.470791 | orchestrator | Monday 06 April 2026 05:12:57 +0000 (0:00:00.219) 0:05:27.644 **********
2026-04-06 05:13:05.470803 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-04-06 05:13:05.470814 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-04-06 05:13:05.470825 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-04-06 05:13:05.470836 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-04-06 05:13:05.470847 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-04-06 05:13:05.470858 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-04-06 05:13:05.470868 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-04-06 05:13:05.470889 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-04-06 05:13:05.470900 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-06 05:13:05.470911 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-06 05:13:05.470922 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-06 05:13:05.470933 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-06 05:13:05.470944 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-06 05:13:05.470954 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-06 05:13:05.470965 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-04-06 05:13:05.470976 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-04-06 05:13:05.471038 | orchestrator |
2026-04-06 05:13:05.471052 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-06 05:13:05.471063 | orchestrator | Monday 06 April 2026 05:13:03 +0000 (0:00:05.721) 0:05:33.365 **********
2026-04-06 05:13:05.471074 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:05.471085 | orchestrator |
2026-04-06 05:13:05.471095 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-06 05:13:05.471106 | orchestrator | Monday 06 April 2026 05:13:03 +0000 (0:00:00.138) 0:05:33.504 **********
2026-04-06 05:13:05.471117 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:05.471128 | orchestrator |
2026-04-06 05:13:05.471139 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-06 05:13:05.471150 | orchestrator | Monday 06 April 2026 05:13:03 +0000 (0:00:00.126) 0:05:33.630 **********
2026-04-06 05:13:05.471160 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:05.471171 | orchestrator |
2026-04-06 05:13:05.471182 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-06 05:13:05.471193 | orchestrator | Monday 06 April 2026 05:13:04 +0000 (0:00:00.138) 0:05:33.768 **********
2026-04-06 05:13:05.471210 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:05.471221 | orchestrator |
2026-04-06 05:13:05.471235 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-06 05:13:05.471254 | orchestrator | Monday 06 April 2026 05:13:04 +0000 (0:00:00.122) 0:05:33.891 **********
2026-04-06 05:13:05.471269 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:05.471285 | orchestrator |
2026-04-06 05:13:05.471303 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-06 05:13:05.471322 | orchestrator | Monday 06 April 2026 05:13:04 +0000 (0:00:00.133) 0:05:34.025 **********
2026-04-06 05:13:05.471340 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:05.471361 | orchestrator |
2026-04-06 05:13:05.471380 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-06 05:13:05.471401 | orchestrator | Monday 06 April 2026 05:13:04 +0000 (0:00:00.122) 0:05:34.147 **********
2026-04-06 05:13:05.471420 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:05.471439 | orchestrator |
2026-04-06 05:13:05.471460 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-06 05:13:05.471479 | orchestrator | Monday 06 April 2026 05:13:04 +0000 (0:00:00.149) 0:05:34.297 **********
2026-04-06 05:13:05.471494 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:05.471505 | orchestrator |
2026-04-06 05:13:05.471516 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-06 05:13:05.471526 | orchestrator | Monday 06 April 2026 05:13:05 +0000 (0:00:00.451) 0:05:34.748 **********
2026-04-06 05:13:05.471537 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:05.471548 | orchestrator |
2026-04-06 05:13:05.471559 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-06 05:13:05.471569 | orchestrator | Monday 06 April 2026 05:13:05 +0000 (0:00:00.122) 0:05:34.871 **********
2026-04-06 05:13:05.471580 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:05.471600 | orchestrator |
2026-04-06 05:13:05.471618 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-06 05:13:05.471636 | orchestrator | Monday 06 April 2026 05:13:05 +0000 (0:00:00.179) 0:05:35.051 **********
2026-04-06 05:13:05.471656 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:05.471676 | orchestrator |
2026-04-06 05:13:05.471707 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-06 05:13:23.809924 | orchestrator | Monday 06 April 2026 05:13:05 +0000 (0:00:00.127) 0:05:35.178 **********
2026-04-06 05:13:23.810151 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:23.810171 | orchestrator |
2026-04-06 05:13:23.810184 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-06 05:13:23.810195 | orchestrator | Monday 06 April 2026 05:13:05 +0000 (0:00:00.132) 0:05:35.311 **********
2026-04-06 05:13:23.810207 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:23.810218 | orchestrator |
2026-04-06 05:13:23.810229 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-06 05:13:23.810240 | orchestrator | Monday 06 April 2026 05:13:05 +0000 (0:00:00.239) 0:05:35.550 **********
2026-04-06 05:13:23.810251 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:23.810262 | orchestrator |
2026-04-06 05:13:23.810273 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-06 05:13:23.810284 | orchestrator | Monday 06 April 2026 05:13:05 +0000 (0:00:00.131) 0:05:35.682 **********
2026-04-06 05:13:23.810295 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:23.810306 | orchestrator |
2026-04-06 05:13:23.810317 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-06 05:13:23.810328 | orchestrator | Monday 06 April 2026 05:13:06 +0000 (0:00:00.224) 0:05:35.906 **********
2026-04-06 05:13:23.810339 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:23.810350 | orchestrator |
2026-04-06 05:13:23.810361 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-06 05:13:23.810371 | orchestrator | Monday 06 April 2026 05:13:06 +0000 (0:00:00.135) 0:05:36.041 **********
2026-04-06 05:13:23.810382 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:23.810393 | orchestrator |
2026-04-06 05:13:23.810405 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-06 05:13:23.810417 | orchestrator | Monday 06 April 2026 05:13:06 +0000 (0:00:00.152) 0:05:36.194 **********
2026-04-06 05:13:23.810428 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:23.810439 | orchestrator |
2026-04-06 05:13:23.810450 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-06 05:13:23.810463 | orchestrator | Monday 06 April 2026 05:13:06 +0000 (0:00:00.142) 0:05:36.336 **********
2026-04-06 05:13:23.810476 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:23.810488 | orchestrator |
2026-04-06 05:13:23.810501 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-06 05:13:23.810514 | orchestrator | Monday 06 April 2026 05:13:06 +0000 (0:00:00.140) 0:05:36.477 **********
2026-04-06 05:13:23.810527 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:23.810541 | orchestrator |
2026-04-06 05:13:23.810553 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-06 05:13:23.810566 | orchestrator | Monday 06 April 2026 05:13:06 +0000 (0:00:00.139) 0:05:36.616 **********
2026-04-06 05:13:23.810579 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:23.810591 | orchestrator |
2026-04-06 05:13:23.810605 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-06 05:13:23.810617 | orchestrator | Monday 06 April 2026 05:13:07 +0000 (0:00:00.131) 0:05:36.748 **********
2026-04-06 05:13:23.810629 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-06 05:13:23.810642 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-06 05:13:23.810655 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-06 05:13:23.810693 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:23.810707 | orchestrator |
2026-04-06 05:13:23.810719 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-06 05:13:23.810746 | orchestrator | Monday 06 April 2026 05:13:08 +0000 (0:00:01.013) 0:05:37.761 **********
2026-04-06 05:13:23.810759 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-06 05:13:23.810772 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-06 05:13:23.810784 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-06 05:13:23.810797 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:23.810810 | orchestrator |
2026-04-06 05:13:23.810821 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-06 05:13:23.810832 | orchestrator | Monday 06 April 2026 05:13:08 +0000 (0:00:00.431) 0:05:38.193 **********
2026-04-06 05:13:23.810842 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-06 05:13:23.810853 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-06 05:13:23.810864 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-06 05:13:23.810874 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:23.810885 | orchestrator |
2026-04-06 05:13:23.810896 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-06 05:13:23.810907 | orchestrator | Monday 06 April 2026 05:13:08 +0000 (0:00:00.162) 0:05:38.628 **********
2026-04-06 05:13:23.810918 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:23.810928 | orchestrator |
2026-04-06 05:13:23.810939 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-06 05:13:23.810950 | orchestrator | Monday 06 April 2026 05:13:09 +0000 (0:00:00.162) 0:05:38.791 **********
2026-04-06 05:13:23.810961 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-06 05:13:23.810972 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:23.811001 | orchestrator |
2026-04-06 05:13:23.811013 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-06 05:13:23.811024 | orchestrator | Monday 06 April 2026 05:13:09 +0000 (0:00:00.330) 0:05:39.121 **********
2026-04-06 05:13:23.811034 | orchestrator | changed: [testbed-node-1]
2026-04-06 05:13:23.811045 | orchestrator |
2026-04-06 05:13:23.811056 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-06 05:13:23.811066 | orchestrator | Monday 06 April 2026 05:13:10 +0000 (0:00:00.852) 0:05:39.974 **********
2026-04-06 05:13:23.811077 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:13:23.811088 | orchestrator |
2026-04-06 05:13:23.811099 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-06 05:13:23.811127 | orchestrator | Monday 06 April 2026 05:13:10 +0000 (0:00:00.185) 0:05:40.159 **********
2026-04-06 05:13:23.811138 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1
2026-04-06 05:13:23.811150 | orchestrator |
2026-04-06 05:13:23.811160 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-06 05:13:23.811171 | orchestrator | Monday 06 April 2026 05:13:10 +0000 (0:00:00.254) 0:05:40.413 **********
2026-04-06 05:13:23.811182 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)]
2026-04-06 05:13:23.811192 | orchestrator |
2026-04-06 05:13:23.811203 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-06 05:13:23.811214 | orchestrator | Monday 06 April 2026 05:13:12 +0000 (0:00:02.111) 0:05:42.525 **********
2026-04-06 05:13:23.811225 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:23.811236 | orchestrator |
2026-04-06 05:13:23.811247 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-06 05:13:23.811257 | orchestrator | Monday 06 April 2026 05:13:12 +0000 (0:00:00.186) 0:05:42.711 **********
2026-04-06 05:13:23.811268 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:13:23.811279 | orchestrator |
2026-04-06 05:13:23.811290 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-06 05:13:23.811310 | orchestrator | Monday 06 April 2026 05:13:13 +0000 (0:00:00.469) 0:05:43.181 **********
2026-04-06 05:13:23.811321 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:13:23.811332 | orchestrator |
2026-04-06 05:13:23.811343 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-06 05:13:23.811353 | orchestrator | Monday 06 April 2026 05:13:13 +0000 (0:00:00.176) 0:05:43.358 **********
2026-04-06 05:13:23.811364 | orchestrator | changed: [testbed-node-1]
2026-04-06 05:13:23.811375 | orchestrator |
2026-04-06 05:13:23.811386 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-06 05:13:23.811397 | orchestrator | Monday 06 April 2026 05:13:14 +0000 (0:00:01.107) 0:05:44.465 **********
2026-04-06 05:13:23.811408 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:13:23.811419 | orchestrator |
2026-04-06 05:13:23.811429 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-06 05:13:23.811440 | orchestrator | Monday 06 April 2026 05:13:15 +0000 (0:00:00.602) 0:05:45.068 **********
2026-04-06 05:13:23.811451 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:13:23.811462 | orchestrator |
2026-04-06 05:13:23.811472 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-06 05:13:23.811483 | orchestrator | Monday 06 April 2026 05:13:15 +0000 (0:00:00.439) 0:05:45.507 **********
2026-04-06 05:13:23.811494 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:13:23.811504 | orchestrator |
2026-04-06 05:13:23.811515 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-06 05:13:23.811526 | orchestrator | Monday 06 April 2026 05:13:16 +0000 (0:00:00.502) 0:05:46.010 **********
2026-04-06 05:13:23.811537 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-04-06 05:13:23.811547 | orchestrator |
2026-04-06 05:13:23.811558 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-06 05:13:23.811569 | orchestrator | Monday 06 April 2026 05:13:16 +0000 (0:00:00.597) 0:05:46.608 **********
2026-04-06 05:13:23.811579 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-04-06 05:13:23.811590 | orchestrator |
2026-04-06 05:13:23.811601 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-06 05:13:23.811612 | orchestrator | Monday 06 April 2026 05:13:17 +0000 (0:00:00.538) 0:05:47.146 **********
2026-04-06 05:13:23.811623 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-06 05:13:23.811639 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-04-06 05:13:23.811650 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-06 05:13:23.811661 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-04-06 05:13:23.811672 | orchestrator |
2026-04-06 05:13:23.811682 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-06 05:13:23.811693 | orchestrator | Monday 06 April 2026 05:13:20 +0000 (0:00:02.853) 0:05:49.999 **********
2026-04-06 05:13:23.811704 | orchestrator | changed: [testbed-node-1]
2026-04-06 05:13:23.811715 | orchestrator |
2026-04-06 05:13:23.811726 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-06 05:13:23.811737 | orchestrator | Monday 06 April 2026 05:13:21 +0000 (0:00:01.036) 0:05:51.035 **********
2026-04-06 05:13:23.811748 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:13:23.811758 | orchestrator |
2026-04-06 05:13:23.811769 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-04-06 05:13:23.811780 | orchestrator | Monday 06 April 2026 05:13:21 +0000 (0:00:00.138) 0:05:51.174 **********
2026-04-06 05:13:23.811791 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:13:23.811801 | orchestrator |
2026-04-06 05:13:23.811812 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-04-06 05:13:23.811823 | orchestrator | Monday 06 April 2026 05:13:21 +0000 (0:00:00.145) 0:05:51.320 **********
2026-04-06 05:13:23.811834 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:13:23.811845 | orchestrator |
2026-04-06 05:13:23.811856 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-06 05:13:23.811873 | orchestrator | Monday 06 April 2026 05:13:22 +0000 (0:00:01.073) 0:05:52.393 **********
2026-04-06 05:13:23.811884 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:13:23.811895 | orchestrator |
2026-04-06 05:13:23.811906 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-04-06 05:13:23.811916 | orchestrator | Monday 06 April 2026 05:13:23 +0000 (0:00:00.789) 0:05:53.183 **********
2026-04-06 05:13:23.811927 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:13:23.811938 | orchestrator |
2026-04-06 05:13:23.811949 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-04-06 05:13:23.811959 | orchestrator | Monday 06 April 2026 05:13:23 +0000 (0:00:00.126) 0:05:53.310 **********
2026-04-06 05:13:23.811970 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1
2026-04-06 05:13:23.812019 | orchestrator |
2026-04-06 05:13:23.812038 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-04-06 05:14:10.976819 | orchestrator | Monday 06 April 2026 05:13:23 +0000 (0:00:00.208) 0:05:53.519 **********
2026-04-06 05:14:10.976905 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:14:10.976915 | orchestrator |
2026-04-06 05:14:10.976922 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-04-06 05:14:10.976927 | orchestrator | Monday 06 April 2026 05:13:23 +0000 (0:00:00.142) 0:05:53.661 **********
2026-04-06 05:14:10.976933 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:14:10.976938 | orchestrator |
2026-04-06 05:14:10.976944 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-04-06 05:14:10.976950 | orchestrator | Monday 06 April 2026 05:13:24 +0000 (0:00:00.133) 0:05:53.795 **********
2026-04-06 05:14:10.976955 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1
2026-04-06 05:14:10.976960 | orchestrator |
2026-04-06 05:14:10.976966 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-04-06 05:14:10.976990 | orchestrator | Monday 06 April 2026 05:13:24 +0000 (0:00:00.212) 0:05:54.007 **********
2026-04-06 05:14:10.976996 | orchestrator | changed: [testbed-node-1]
2026-04-06 05:14:10.977001 | orchestrator |
2026-04-06 05:14:10.977006 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-04-06 05:14:10.977012 | orchestrator | Monday 06 April 2026 05:13:25 +0000 (0:00:01.390) 0:05:55.398 **********
2026-04-06 05:14:10.977017 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:14:10.977023 | orchestrator |
2026-04-06 05:14:10.977028 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-04-06 05:14:10.977033 | orchestrator | Monday 06 April 2026 05:13:26 +0000 (0:00:00.994) 0:05:56.393 **********
2026-04-06 05:14:10.977039 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:14:10.977044 | orchestrator |
2026-04-06 05:14:10.977049 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-04-06 05:14:10.977054 | orchestrator | Monday 06 April 2026 05:13:28 +0000 (0:00:01.423) 0:05:57.816 **********
2026-04-06 05:14:10.977059 | orchestrator | changed: [testbed-node-1]
2026-04-06 05:14:10.977065 | orchestrator |
2026-04-06 05:14:10.977070 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-04-06 05:14:10.977075 | orchestrator | Monday 06 April 2026 05:13:30 +0000 (0:00:02.162) 0:05:59.979 **********
2026-04-06 05:14:10.977080 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1
2026-04-06 05:14:10.977086 | orchestrator |
2026-04-06 05:14:10.977091 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-04-06 05:14:10.977096 | orchestrator | Monday 06 April 2026 05:13:30 +0000 (0:00:00.527) 0:06:00.506 **********
2026-04-06 05:14:10.977101 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-04-06 05:14:10.977107 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:14:10.977112 | orchestrator |
2026-04-06 05:14:10.977117 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-04-06 05:14:10.977140 | orchestrator | Monday 06 April 2026 05:13:52 +0000 (0:00:21.888) 0:06:22.395 **********
2026-04-06 05:14:10.977146 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:14:10.977151 | orchestrator |
2026-04-06 05:14:10.977156 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-04-06 05:14:10.977161 | orchestrator | Monday 06 April 2026 05:13:54 +0000 (0:00:02.003) 0:06:24.399 **********
2026-04-06 05:14:10.977166 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:14:10.977171 | orchestrator |
2026-04-06 05:14:10.977176 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-04-06 05:14:10.977192 | orchestrator | Monday 06 April 2026 05:13:54 +0000 (0:00:00.146) 0:06:24.545 **********
2026-04-06 05:14:10.977199 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-06 05:14:10.977206 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-06 05:14:10.977212 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-06 05:14:10.977217 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-06 05:14:10.977235 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-06 05:14:10.977241 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}])
2026-04-06 05:14:10.977248 | orchestrator |
2026-04-06 05:14:10.977253 | orchestrator | TASK [Start ceph mgr] **********************************************************
2026-04-06 05:14:10.977259 | orchestrator | Monday 06 April 2026 05:14:03 +0000 (0:00:09.048) 0:06:33.594 **********
2026-04-06 05:14:10.977264 | orchestrator | changed: [testbed-node-1]
2026-04-06 05:14:10.977269 | orchestrator |
2026-04-06 05:14:10.977274 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-06 05:14:10.977279 | orchestrator | Monday 06 April 2026 05:14:05 +0000 (0:00:01.538) 0:06:35.132 **********
2026-04-06 05:14:10.977285 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-06 05:14:10.977290 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1)
2026-04-06 05:14:10.977295 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2)
2026-04-06 05:14:10.977305 | orchestrator |
2026-04-06 05:14:10.977310 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-06 05:14:10.977315 | orchestrator | Monday 06 April 2026 05:14:06 +0000 (0:00:01.186) 0:06:36.319 **********
2026-04-06 05:14:10.977320 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-06 05:14:10.977326 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-06 05:14:10.977331 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-06 05:14:10.977336 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:14:10.977341 | orchestrator |
2026-04-06 05:14:10.977346 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] ***********
2026-04-06 05:14:10.977351 | orchestrator | Monday 06 April 2026 05:14:07 +0000 (0:00:00.501) 0:06:36.820 **********
2026-04-06 05:14:10.977356 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:14:10.977361 | orchestrator |
2026-04-06 05:14:10.977366 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] ***
2026-04-06 05:14:10.977371 | orchestrator | Monday 06 April 2026 05:14:07 +0000 (0:00:00.142) 0:06:36.962 **********
2026-04-06 05:14:10.977378 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:14:10.977384 | orchestrator |
2026-04-06 05:14:10.977390 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-06 05:14:10.977396 | orchestrator | Monday 06 April 2026 05:14:08 +0000 (0:00:01.339) 0:06:38.302 **********
2026-04-06 05:14:10.977402 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:14:10.977408 | orchestrator |
2026-04-06 05:14:10.977414 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-06 05:14:10.977423 | orchestrator | Monday 06 April 2026 05:14:09 +0000 (0:00:00.418) 0:06:38.720 **********
2026-04-06 05:14:10.977430 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:14:10.977435 | orchestrator |
2026-04-06 05:14:10.977442 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-06 05:14:10.977448 | orchestrator | Monday 06 April 2026 05:14:09 +0000 (0:00:00.132) 0:06:38.853 **********
2026-04-06 05:14:10.977453 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:14:10.977459 | orchestrator |
2026-04-06 05:14:10.977465 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-04-06 05:14:10.977471 | orchestrator | Monday 06 April 2026 05:14:09 +0000 (0:00:00.149) 0:06:39.003 **********
2026-04-06 05:14:10.977477 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:14:10.977483 | orchestrator |
2026-04-06 05:14:10.977489 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-06 05:14:10.977495 | orchestrator | Monday 06 April 2026 05:14:09 +0000 (0:00:00.139) 0:06:39.142 **********
2026-04-06 05:14:10.977501 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:14:10.977507 | orchestrator |
2026-04-06 05:14:10.977513 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-06 05:14:10.977519 | orchestrator | Monday 06 April 2026 05:14:09 +0000 (0:00:00.136) 0:06:39.279 **********
2026-04-06 05:14:10.977525 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:14:10.977531 | orchestrator |
2026-04-06 05:14:10.977537 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-06 05:14:10.977543 | orchestrator | Monday 06 April 2026 05:14:09 +0000 (0:00:00.124) 0:06:39.403 **********
2026-04-06 05:14:10.977549 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:14:10.977555 | orchestrator |
2026-04-06 05:14:10.977560 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-04-06 05:14:10.977566 | orchestrator |
2026-04-06 05:14:10.977572 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-04-06 05:14:10.977578 | orchestrator | Monday 06 April 2026 05:14:10 +0000 (0:00:00.624) 0:06:40.028 **********
2026-04-06 05:14:10.977584 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:10.977590 | orchestrator |
2026-04-06 05:14:10.977596 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-04-06 05:14:10.977606 | orchestrator | Monday 06 April 2026 05:14:10 +0000 (0:00:00.177) 0:06:40.509 **********
2026-04-06 05:14:10.977612 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:10.977618 | orchestrator |
2026-04-06 05:14:10.977624 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-04-06 05:14:10.977634 | orchestrator | Monday 06 April 2026 05:14:10 +0000 (0:00:00.123) 0:06:40.687 **********
2026-04-06 05:14:19.448297 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:19.448410 | orchestrator |
2026-04-06 05:14:19.448426 | orchestrator | TASK [Select a running monitor] ************************************************
2026-04-06 05:14:19.448440 | orchestrator | Monday 06 April 2026 05:14:11 +0000 (0:00:00.123) 0:06:40.810 **********
2026-04-06 05:14:19.448451 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:19.448463 | orchestrator |
2026-04-06 05:14:19.448475 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-06 05:14:19.448486 | orchestrator | Monday 06 April 2026 05:14:11 +0000 (0:00:00.143) 0:06:40.954 **********
2026-04-06 05:14:19.448497 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2
2026-04-06 05:14:19.448508 | orchestrator |
2026-04-06 05:14:19.448518 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-06 05:14:19.448529 | orchestrator | Monday 06 April 2026 05:14:11 +0000 (0:00:00.551) 0:06:41.505 **********
2026-04-06 05:14:19.448540 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:19.448551 | orchestrator |
2026-04-06 05:14:19.448562 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-06 05:14:19.448572 | orchestrator | Monday 06 April 2026 05:14:12 +0000 (0:00:00.449) 0:06:41.955 **********
2026-04-06 05:14:19.448583 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:19.448594 | orchestrator |
2026-04-06 05:14:19.448604 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-06 05:14:19.448615 | orchestrator | Monday 06 April 2026 05:14:12 +0000 (0:00:00.150) 0:06:42.105 **********
2026-04-06 05:14:19.448626 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:19.448637 | orchestrator |
2026-04-06 05:14:19.448648 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-06 05:14:19.448660 | orchestrator | Monday 06 April 2026 05:14:12 +0000 (0:00:00.465) 0:06:42.571 **********
2026-04-06 05:14:19.448670 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:19.448681 | orchestrator |
2026-04-06 05:14:19.448692 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-06 05:14:19.448703 | orchestrator | Monday 06 April 2026 05:14:13 +0000 (0:00:00.156) 0:06:42.727 **********
2026-04-06 05:14:19.448713 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:19.448724 | orchestrator |
2026-04-06 05:14:19.448735 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-06 05:14:19.448746 | orchestrator | Monday 06 April 2026 05:14:13 +0000 (0:00:00.178) 0:06:42.906 **********
2026-04-06 05:14:19.448756 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:19.448767 | orchestrator |
2026-04-06 05:14:19.448778 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-06 05:14:19.448790 | orchestrator | Monday 06 April 2026 05:14:13 +0000 (0:00:00.173) 0:06:43.080 **********
2026-04-06 05:14:19.448800 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:19.448811 | orchestrator |
2026-04-06 05:14:19.448822 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-06 05:14:19.448837 | orchestrator | Monday 06 April 2026 05:14:13 +0000 (0:00:00.154) 0:06:43.234 **********
2026-04-06 05:14:19.448851 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:19.448864 | orchestrator |
2026-04-06 05:14:19.448876 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-06 05:14:19.448888 | orchestrator | Monday 06 April 2026 05:14:13 +0000 (0:00:00.181) 0:06:43.416 **********
2026-04-06 05:14:19.448901 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-06 05:14:19.448931 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] =>
(item=testbed-node-1) 2026-04-06 05:14:19.449005 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-06 05:14:19.449021 | orchestrator | 2026-04-06 05:14:19.449034 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-06 05:14:19.449047 | orchestrator | Monday 06 April 2026 05:14:14 +0000 (0:00:00.968) 0:06:44.384 ********** 2026-04-06 05:14:19.449059 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:14:19.449072 | orchestrator | 2026-04-06 05:14:19.449084 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-06 05:14:19.449097 | orchestrator | Monday 06 April 2026 05:14:14 +0000 (0:00:00.257) 0:06:44.642 ********** 2026-04-06 05:14:19.449109 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:14:19.449122 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:14:19.449135 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-06 05:14:19.449147 | orchestrator | 2026-04-06 05:14:19.449160 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-06 05:14:19.449173 | orchestrator | Monday 06 April 2026 05:14:17 +0000 (0:00:02.163) 0:06:46.805 ********** 2026-04-06 05:14:19.449185 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-06 05:14:19.449198 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-06 05:14:19.449212 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-06 05:14:19.449223 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:14:19.449234 | orchestrator | 2026-04-06 05:14:19.449245 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-06 05:14:19.449255 | orchestrator | Monday 06 April 2026 05:14:17 +0000 (0:00:00.736) 
0:06:47.542 ********** 2026-04-06 05:14:19.449268 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-06 05:14:19.449282 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-06 05:14:19.449312 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-06 05:14:19.449324 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:14:19.449334 | orchestrator | 2026-04-06 05:14:19.449346 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-06 05:14:19.449356 | orchestrator | Monday 06 April 2026 05:14:18 +0000 (0:00:00.978) 0:06:48.521 ********** 2026-04-06 05:14:19.449370 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:14:19.449383 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:14:19.449395 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:14:19.449417 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:14:19.449428 | orchestrator | 2026-04-06 05:14:19.449439 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-06 05:14:19.449450 | orchestrator | Monday 06 April 2026 05:14:19 +0000 (0:00:00.529) 0:06:49.051 ********** 2026-04-06 05:14:19.449469 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '06ed7bf51830', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-06 05:14:15.446963', 'end': '2026-04-06 05:14:15.512189', 'delta': '0:00:00.065226', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['06ed7bf51830'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-06 05:14:19.449484 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '6879ce368bbc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-06 05:14:16.347048', 'end': 
'2026-04-06 05:14:16.395408', 'delta': '0:00:00.048360', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6879ce368bbc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-06 05:14:19.449505 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'a87eea657fd7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-06 05:14:16.887192', 'end': '2026-04-06 05:14:16.943201', 'delta': '0:00:00.056009', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a87eea657fd7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-06 05:14:23.205931 | orchestrator | 2026-04-06 05:14:23.206174 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-06 05:14:23.206196 | orchestrator | Monday 06 April 2026 05:14:19 +0000 (0:00:00.206) 0:06:49.257 ********** 2026-04-06 05:14:23.206208 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:14:23.206221 | orchestrator | 2026-04-06 05:14:23.206232 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-06 05:14:23.206243 | orchestrator | Monday 06 April 2026 05:14:19 +0000 (0:00:00.288) 0:06:49.545 ********** 2026-04-06 05:14:23.206254 | orchestrator | 
skipping: [testbed-node-2] 2026-04-06 05:14:23.206267 | orchestrator | 2026-04-06 05:14:23.206278 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-06 05:14:23.206289 | orchestrator | Monday 06 April 2026 05:14:20 +0000 (0:00:00.259) 0:06:49.804 ********** 2026-04-06 05:14:23.206300 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:14:23.206310 | orchestrator | 2026-04-06 05:14:23.206322 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-06 05:14:23.206333 | orchestrator | Monday 06 April 2026 05:14:20 +0000 (0:00:00.168) 0:06:49.973 ********** 2026-04-06 05:14:23.206366 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:14:23.206378 | orchestrator | 2026-04-06 05:14:23.206389 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-06 05:14:23.206400 | orchestrator | Monday 06 April 2026 05:14:21 +0000 (0:00:00.994) 0:06:50.968 ********** 2026-04-06 05:14:23.206410 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:14:23.206421 | orchestrator | 2026-04-06 05:14:23.206432 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-06 05:14:23.206444 | orchestrator | Monday 06 April 2026 05:14:21 +0000 (0:00:00.162) 0:06:51.130 ********** 2026-04-06 05:14:23.206455 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:14:23.206466 | orchestrator | 2026-04-06 05:14:23.206479 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-06 05:14:23.206492 | orchestrator | Monday 06 April 2026 05:14:21 +0000 (0:00:00.124) 0:06:51.255 ********** 2026-04-06 05:14:23.206504 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:14:23.206517 | orchestrator | 2026-04-06 05:14:23.206530 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 
2026-04-06 05:14:23.206542 | orchestrator | Monday 06 April 2026 05:14:21 +0000 (0:00:00.239) 0:06:51.494 ********** 2026-04-06 05:14:23.206555 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:14:23.206567 | orchestrator | 2026-04-06 05:14:23.206580 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-06 05:14:23.206592 | orchestrator | Monday 06 April 2026 05:14:21 +0000 (0:00:00.144) 0:06:51.638 ********** 2026-04-06 05:14:23.206605 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:14:23.206617 | orchestrator | 2026-04-06 05:14:23.206629 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-06 05:14:23.206642 | orchestrator | Monday 06 April 2026 05:14:22 +0000 (0:00:00.152) 0:06:51.791 ********** 2026-04-06 05:14:23.206654 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:14:23.206666 | orchestrator | 2026-04-06 05:14:23.206679 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-06 05:14:23.206691 | orchestrator | Monday 06 April 2026 05:14:22 +0000 (0:00:00.130) 0:06:51.921 ********** 2026-04-06 05:14:23.206704 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:14:23.206716 | orchestrator | 2026-04-06 05:14:23.206741 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-06 05:14:23.206754 | orchestrator | Monday 06 April 2026 05:14:22 +0000 (0:00:00.151) 0:06:52.073 ********** 2026-04-06 05:14:23.206767 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:14:23.206779 | orchestrator | 2026-04-06 05:14:23.206793 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-06 05:14:23.206806 | orchestrator | Monday 06 April 2026 05:14:22 +0000 (0:00:00.420) 0:06:52.494 ********** 2026-04-06 05:14:23.206819 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:14:23.206831 
| orchestrator | 2026-04-06 05:14:23.206844 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-06 05:14:23.206857 | orchestrator | Monday 06 April 2026 05:14:22 +0000 (0:00:00.149) 0:06:52.643 ********** 2026-04-06 05:14:23.206870 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:14:23.206883 | orchestrator | 2026-04-06 05:14:23.206896 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-06 05:14:23.206906 | orchestrator | Monday 06 April 2026 05:14:23 +0000 (0:00:00.153) 0:06:52.796 ********** 2026-04-06 05:14:23.206920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:14:23.206933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:14:23.206996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
2026-04-06 05:14:23.207011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-06 05:14:23.207024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:14:23.207036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:14:23.207047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 
05:14:23.207073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a86fd0c9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part16', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part14', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part15', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part1', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 
'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-06 05:14:23.438176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:14:23.438334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:14:23.438363 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:14:23.438386 | orchestrator | 2026-04-06 05:14:23.438406 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-06 05:14:23.438425 | orchestrator | Monday 06 April 2026 05:14:23 +0000 (0:00:00.233) 0:06:53.030 ********** 2026-04-06 05:14:23.438447 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:14:23.438479 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:14:23.438498 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:14:23.438518 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:14:23.438594 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:14:23.438616 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:14:23.438635 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:14:23.438669 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a86fd0c9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part16', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part14', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part15', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part1', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:14:23.438720 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:14:37.262486 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:14:37.262638 | orchestrator | skipping: [testbed-node-2] 2026-04-06 
05:14:37.262657 | orchestrator |
2026-04-06 05:14:37.262670 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-06 05:14:37.262682 | orchestrator | Monday 06 April 2026 05:14:23 +0000 (0:00:00.242) 0:06:53.272 **********
2026-04-06 05:14:37.262693 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:37.262705 | orchestrator |
2026-04-06 05:14:37.262717 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-06 05:14:37.262728 | orchestrator | Monday 06 April 2026 05:14:24 +0000 (0:00:00.523) 0:06:53.796 **********
2026-04-06 05:14:37.262739 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:37.262750 | orchestrator |
2026-04-06 05:14:37.262761 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-06 05:14:37.262772 | orchestrator | Monday 06 April 2026 05:14:24 +0000 (0:00:00.129) 0:06:53.925 **********
2026-04-06 05:14:37.262783 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:37.262794 | orchestrator |
2026-04-06 05:14:37.262806 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-06 05:14:37.262817 | orchestrator | Monday 06 April 2026 05:14:24 +0000 (0:00:00.501) 0:06:54.427 **********
2026-04-06 05:14:37.262827 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:37.262839 | orchestrator |
2026-04-06 05:14:37.262850 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-06 05:14:37.262860 | orchestrator | Monday 06 April 2026 05:14:24 +0000 (0:00:00.136) 0:06:54.564 **********
2026-04-06 05:14:37.262871 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:37.262882 | orchestrator |
2026-04-06 05:14:37.262893 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-06 05:14:37.262905 | orchestrator | Monday 06 April 2026 05:14:25 +0000 (0:00:00.243) 0:06:54.807 **********
2026-04-06 05:14:37.262948 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:37.262984 | orchestrator |
2026-04-06 05:14:37.262999 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-06 05:14:37.263029 | orchestrator | Monday 06 April 2026 05:14:25 +0000 (0:00:00.132) 0:06:54.940 **********
2026-04-06 05:14:37.263043 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-06 05:14:37.263057 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-06 05:14:37.263069 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-06 05:14:37.263082 | orchestrator |
2026-04-06 05:14:37.263096 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-06 05:14:37.263107 | orchestrator | Monday 06 April 2026 05:14:26 +0000 (0:00:00.993) 0:06:55.934 **********
2026-04-06 05:14:37.263118 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-06 05:14:37.263130 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-06 05:14:37.263140 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-06 05:14:37.263152 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:37.263163 | orchestrator |
2026-04-06 05:14:37.263174 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-06 05:14:37.263185 | orchestrator | Monday 06 April 2026 05:14:26 +0000 (0:00:00.180) 0:06:56.114 **********
2026-04-06 05:14:37.263196 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:37.263206 | orchestrator |
2026-04-06 05:14:37.263217 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-06 05:14:37.263228 | orchestrator | Monday 06 April 2026 05:14:26 +0000 (0:00:00.446) 0:06:56.560 **********
2026-04-06 05:14:37.263240 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-06 05:14:37.263251 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 05:14:37.263262 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-06 05:14:37.263273 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-06 05:14:37.263284 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-06 05:14:37.263295 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-06 05:14:37.263306 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-06 05:14:37.263317 | orchestrator |
2026-04-06 05:14:37.263328 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-06 05:14:37.263339 | orchestrator | Monday 06 April 2026 05:14:27 +0000 (0:00:00.801) 0:06:57.361 **********
2026-04-06 05:14:37.263350 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-06 05:14:37.263361 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 05:14:37.263372 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-06 05:14:37.263382 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-06 05:14:37.263412 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-06 05:14:37.263424 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-06 05:14:37.263459 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-06 05:14:37.263471 | orchestrator |
2026-04-06 05:14:37.263482 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-04-06 05:14:37.263493 | orchestrator | Monday 06 April 2026 05:14:29 +0000 (0:00:01.671) 0:06:59.033 **********
2026-04-06 05:14:37.263504 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:37.263515 | orchestrator |
2026-04-06 05:14:37.263525 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-04-06 05:14:37.263536 | orchestrator | Monday 06 April 2026 05:14:29 +0000 (0:00:00.237) 0:06:59.271 **********
2026-04-06 05:14:37.263556 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:37.263567 | orchestrator |
2026-04-06 05:14:37.263578 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-04-06 05:14:37.263589 | orchestrator | Monday 06 April 2026 05:14:29 +0000 (0:00:00.234) 0:06:59.506 **********
2026-04-06 05:14:37.263599 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:37.263610 | orchestrator |
2026-04-06 05:14:37.263621 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-04-06 05:14:37.263632 | orchestrator | Monday 06 April 2026 05:14:29 +0000 (0:00:00.228) 0:06:59.647 **********
2026-04-06 05:14:37.263643 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:37.263654 | orchestrator |
2026-04-06 05:14:37.263664 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-04-06 05:14:37.263675 | orchestrator | Monday 06 April 2026 05:14:30 +0000 (0:00:00.228) 0:06:59.876 **********
2026-04-06 05:14:37.263686 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:37.263697 | orchestrator |
2026-04-06 05:14:37.263708 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-04-06 05:14:37.263718 | orchestrator | Monday 06 April 2026 05:14:30 +0000 (0:00:00.143) 0:07:00.019 **********
2026-04-06 05:14:37.263729 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-06 05:14:37.263740 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-06 05:14:37.263750 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-06 05:14:37.263761 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:37.263772 | orchestrator |
2026-04-06 05:14:37.263783 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-04-06 05:14:37.263793 | orchestrator | Monday 06 April 2026 05:14:30 +0000 (0:00:00.442) 0:07:00.462 **********
2026-04-06 05:14:37.263804 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-04-06 05:14:37.263820 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-04-06 05:14:37.263832 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-04-06 05:14:37.263842 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-04-06 05:14:37.263853 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-04-06 05:14:37.263864 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-04-06 05:14:37.263875 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:37.263886 | orchestrator |
2026-04-06 05:14:37.263897 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-04-06 05:14:37.263907 | orchestrator | Monday 06 April 2026 05:14:31 +0000 (0:00:00.994) 0:07:01.456 **********
2026-04-06 05:14:37.263918 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2)
2026-04-06 05:14:37.263929 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-06 05:14:37.263940 | orchestrator |
2026-04-06 05:14:37.263951 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-04-06 05:14:37.263978 | orchestrator | Monday 06 April 2026 05:14:34 +0000 (0:00:02.577) 0:07:04.033 **********
2026-04-06 05:14:37.263990 | orchestrator | changed: [testbed-node-2]
2026-04-06 05:14:37.264001 | orchestrator |
2026-04-06 05:14:37.264012 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-06 05:14:37.264023 | orchestrator | Monday 06 April 2026 05:14:35 +0000 (0:00:01.425) 0:07:05.459 **********
2026-04-06 05:14:37.264034 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2
2026-04-06 05:14:37.264045 | orchestrator |
2026-04-06 05:14:37.264056 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-06 05:14:37.264067 | orchestrator | Monday 06 April 2026 05:14:36 +0000 (0:00:00.508) 0:07:05.968 **********
2026-04-06 05:14:37.264078 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2
2026-04-06 05:14:37.264096 | orchestrator |
2026-04-06 05:14:37.264108 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-06 05:14:37.264118 | orchestrator | Monday 06 April 2026 05:14:36 +0000 (0:00:00.211) 0:07:06.179 **********
2026-04-06 05:14:37.264129 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:37.264141 | orchestrator |
2026-04-06 05:14:37.264152 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-06 05:14:37.264162 | orchestrator | Monday 06 April 2026 05:14:36 +0000 (0:00:00.524) 0:07:06.704 **********
2026-04-06 05:14:37.264173 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:37.264184 | orchestrator |
2026-04-06 05:14:37.264195 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-06 05:14:37.264206 | orchestrator | Monday 06 April 2026 05:14:37 +0000 (0:00:00.135) 0:07:06.839 **********
2026-04-06 05:14:37.264216 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:37.264227 | orchestrator |
2026-04-06 05:14:37.264238 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-06 05:14:37.264255 | orchestrator | Monday 06 April 2026 05:14:37 +0000 (0:00:00.131) 0:07:06.970 **********
2026-04-06 05:14:48.577468 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.577594 | orchestrator |
2026-04-06 05:14:48.577611 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-06 05:14:48.577624 | orchestrator | Monday 06 April 2026 05:14:37 +0000 (0:00:00.135) 0:07:07.106 **********
2026-04-06 05:14:48.577636 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:48.577648 | orchestrator |
2026-04-06 05:14:48.577659 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-06 05:14:48.577671 | orchestrator | Monday 06 April 2026 05:14:37 +0000 (0:00:00.508) 0:07:07.615 **********
2026-04-06 05:14:48.577682 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.577693 | orchestrator |
2026-04-06 05:14:48.577704 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-06 05:14:48.577715 | orchestrator | Monday 06 April 2026 05:14:38 +0000 (0:00:00.160) 0:07:07.776 **********
2026-04-06 05:14:48.577726 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.577737 | orchestrator |
2026-04-06 05:14:48.577748 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-06 05:14:48.577759 | orchestrator | Monday 06 April 2026 05:14:38 +0000 (0:00:00.118) 0:07:07.894 **********
2026-04-06 05:14:48.577770 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:48.577781 | orchestrator |
2026-04-06 05:14:48.577792 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-06 05:14:48.577803 | orchestrator | Monday 06 April 2026 05:14:38 +0000 (0:00:00.530) 0:07:08.424 **********
2026-04-06 05:14:48.577814 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:48.577825 | orchestrator |
2026-04-06 05:14:48.577836 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-06 05:14:48.577847 | orchestrator | Monday 06 April 2026 05:14:39 +0000 (0:00:00.370) 0:07:08.960 **********
2026-04-06 05:14:48.577858 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.577869 | orchestrator |
2026-04-06 05:14:48.577880 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-06 05:14:48.577891 | orchestrator | Monday 06 April 2026 05:14:39 +0000 (0:00:00.370) 0:07:09.330 **********
2026-04-06 05:14:48.577902 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:48.577913 | orchestrator |
2026-04-06 05:14:48.577924 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-06 05:14:48.577935 | orchestrator | Monday 06 April 2026 05:14:39 +0000 (0:00:00.158) 0:07:09.489 **********
2026-04-06 05:14:48.577946 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.577977 | orchestrator |
2026-04-06 05:14:48.577991 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-06 05:14:48.578004 | orchestrator | Monday 06 April 2026 05:14:39 +0000 (0:00:00.123) 0:07:09.612 **********
2026-04-06 05:14:48.578104 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.578121 | orchestrator |
2026-04-06 05:14:48.578133 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-06 05:14:48.578171 | orchestrator | Monday 06 April 2026 05:14:40 +0000 (0:00:00.130) 0:07:09.743 **********
2026-04-06 05:14:48.578185 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.578198 | orchestrator |
2026-04-06 05:14:48.578211 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-06 05:14:48.578224 | orchestrator | Monday 06 April 2026 05:14:40 +0000 (0:00:00.140) 0:07:09.883 **********
2026-04-06 05:14:48.578236 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.578248 | orchestrator |
2026-04-06 05:14:48.578259 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-06 05:14:48.578269 | orchestrator | Monday 06 April 2026 05:14:40 +0000 (0:00:00.130) 0:07:10.013 **********
2026-04-06 05:14:48.578280 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.578291 | orchestrator |
2026-04-06 05:14:48.578301 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-06 05:14:48.578312 | orchestrator | Monday 06 April 2026 05:14:40 +0000 (0:00:00.129) 0:07:10.142 **********
2026-04-06 05:14:48.578323 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:48.578334 | orchestrator |
2026-04-06 05:14:48.578344 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-06 05:14:48.578355 | orchestrator | Monday 06 April 2026 05:14:40 +0000 (0:00:00.142) 0:07:10.285 **********
2026-04-06 05:14:48.578366 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:48.578377 | orchestrator |
2026-04-06 05:14:48.578387 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-06 05:14:48.578398 | orchestrator | Monday 06 April 2026 05:14:40 +0000 (0:00:00.141) 0:07:10.427 **********
2026-04-06 05:14:48.578410 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:48.578421 | orchestrator |
2026-04-06 05:14:48.578431 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-06 05:14:48.578442 | orchestrator | Monday 06 April 2026 05:14:40 +0000 (0:00:00.230) 0:07:10.657 **********
2026-04-06 05:14:48.578453 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.578464 | orchestrator |
2026-04-06 05:14:48.578475 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-06 05:14:48.578485 | orchestrator | Monday 06 April 2026 05:14:41 +0000 (0:00:00.144) 0:07:10.802 **********
2026-04-06 05:14:48.578496 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.578507 | orchestrator |
2026-04-06 05:14:48.578518 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-06 05:14:48.578529 | orchestrator | Monday 06 April 2026 05:14:41 +0000 (0:00:00.120) 0:07:10.923 **********
2026-04-06 05:14:48.578539 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.578550 | orchestrator |
2026-04-06 05:14:48.578561 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-06 05:14:48.578571 | orchestrator | Monday 06 April 2026 05:14:41 +0000 (0:00:00.438) 0:07:11.361 **********
2026-04-06 05:14:48.578582 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.578593 | orchestrator |
2026-04-06 05:14:48.578603 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-06 05:14:48.578614 | orchestrator | Monday 06 April 2026 05:14:41 +0000 (0:00:00.115) 0:07:11.477 **********
2026-04-06 05:14:48.578625 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.578636 | orchestrator |
2026-04-06 05:14:48.578664 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-06 05:14:48.578676 | orchestrator | Monday 06 April 2026 05:14:41 +0000 (0:00:00.136) 0:07:11.614 **********
2026-04-06 05:14:48.578687 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.578698 | orchestrator |
2026-04-06 05:14:48.578709 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-06 05:14:48.578720 | orchestrator | Monday 06 April 2026 05:14:42 +0000 (0:00:00.132) 0:07:11.746 **********
2026-04-06 05:14:48.578739 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.578750 | orchestrator |
2026-04-06 05:14:48.578761 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-06 05:14:48.578773 | orchestrator | Monday 06 April 2026 05:14:42 +0000 (0:00:00.118) 0:07:11.864 **********
2026-04-06 05:14:48.578784 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.578795 | orchestrator |
2026-04-06 05:14:48.578805 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-06 05:14:48.578816 | orchestrator | Monday 06 April 2026 05:14:42 +0000 (0:00:00.125) 0:07:11.990 **********
2026-04-06 05:14:48.578827 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.578838 | orchestrator |
2026-04-06 05:14:48.578849 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-06 05:14:48.578860 | orchestrator | Monday 06 April 2026 05:14:42 +0000 (0:00:00.126) 0:07:12.116 **********
2026-04-06 05:14:48.578870 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.578886 | orchestrator |
2026-04-06 05:14:48.578903 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-06 05:14:48.578920 | orchestrator | Monday 06 April 2026 05:14:42 +0000 (0:00:00.115) 0:07:12.232 **********
2026-04-06 05:14:48.578938 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.578956 | orchestrator |
2026-04-06 05:14:48.578998 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-06 05:14:48.579016 | orchestrator | Monday 06 April 2026 05:14:42 +0000 (0:00:00.134) 0:07:12.367 **********
2026-04-06 05:14:48.579035 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.579054 | orchestrator |
2026-04-06 05:14:48.579073 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-06 05:14:48.579088 | orchestrator | Monday 06 April 2026 05:14:42 +0000 (0:00:00.201) 0:07:12.569 **********
2026-04-06 05:14:48.579099 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:48.579110 | orchestrator |
2026-04-06 05:14:48.579121 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-06 05:14:48.579132 | orchestrator | Monday 06 April 2026 05:14:43 +0000 (0:00:00.925) 0:07:13.494 **********
2026-04-06 05:14:48.579143 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:48.579154 | orchestrator |
2026-04-06 05:14:48.579164 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-06 05:14:48.579175 | orchestrator | Monday 06 April 2026 05:14:45 +0000 (0:00:01.354) 0:07:14.848 **********
2026-04-06 05:14:48.579193 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-04-06 05:14:48.579205 | orchestrator |
2026-04-06 05:14:48.579216 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-06 05:14:48.579227 | orchestrator | Monday 06 April 2026 05:14:45 +0000 (0:00:00.518) 0:07:15.366 **********
2026-04-06 05:14:48.579238 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.579249 | orchestrator |
2026-04-06 05:14:48.579259 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-06 05:14:48.579270 | orchestrator | Monday 06 April 2026 05:14:45 +0000 (0:00:00.141) 0:07:15.515 **********
2026-04-06 05:14:48.579281 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.579292 | orchestrator |
2026-04-06 05:14:48.579303 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-06 05:14:48.579314 | orchestrator | Monday 06 April 2026 05:14:45 +0000 (0:00:00.141) 0:07:15.656 **********
2026-04-06 05:14:48.579324 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-06 05:14:48.579335 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-06 05:14:48.579346 | orchestrator |
2026-04-06 05:14:48.579357 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-06 05:14:48.579368 | orchestrator | Monday 06 April 2026 05:14:46 +0000 (0:00:00.806) 0:07:16.463 **********
2026-04-06 05:14:48.579378 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:48.579399 | orchestrator |
2026-04-06 05:14:48.579410 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-06 05:14:48.579421 | orchestrator | Monday 06 April 2026 05:14:47 +0000 (0:00:00.472) 0:07:16.935 **********
2026-04-06 05:14:48.579431 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.579442 | orchestrator |
2026-04-06 05:14:48.579453 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-06 05:14:48.579464 | orchestrator | Monday 06 April 2026 05:14:47 +0000 (0:00:00.142) 0:07:17.078 **********
2026-04-06 05:14:48.579475 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.579486 | orchestrator |
2026-04-06 05:14:48.579497 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-06 05:14:48.579508 | orchestrator | Monday 06 April 2026 05:14:47 +0000 (0:00:00.139) 0:07:17.217 **********
2026-04-06 05:14:48.579519 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:14:48.579530 | orchestrator |
2026-04-06 05:14:48.579541 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-06 05:14:48.579551 | orchestrator | Monday 06 April 2026 05:14:47 +0000 (0:00:00.146) 0:07:17.363 **********
2026-04-06 05:14:48.579562 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-04-06 05:14:48.579573 | orchestrator |
2026-04-06 05:14:48.579584 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-06 05:14:48.579595 | orchestrator | Monday 06 April 2026 05:14:47 +0000 (0:00:00.216) 0:07:17.579 **********
2026-04-06 05:14:48.579606 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:14:48.579616 | orchestrator |
2026-04-06 05:14:48.579628 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-06 05:14:48.579648 | orchestrator | Monday 06 April 2026 05:14:48 +0000 (0:00:00.141) 0:07:18.285 **********
2026-04-06 05:15:01.550183 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-06 05:15:01.550299 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-06 05:15:01.550317 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-06 05:15:01.550329 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.550342 | orchestrator |
2026-04-06 05:15:01.550354 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-06 05:15:01.550366 | orchestrator | Monday 06 April 2026 05:14:48 +0000 (0:00:00.145) 0:07:18.427 **********
2026-04-06 05:15:01.550377 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.550389 | orchestrator |
2026-04-06 05:15:01.550400 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-06 05:15:01.550411 | orchestrator | Monday 06 April 2026 05:14:48 +0000 (0:00:00.429) 0:07:18.572 **********
2026-04-06 05:15:01.550423 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.550434 | orchestrator |
2026-04-06 05:15:01.550445 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-06 05:15:01.550456 | orchestrator | Monday 06 April 2026 05:14:49 +0000 (0:00:00.167) 0:07:19.002 **********
2026-04-06 05:15:01.550466 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.550477 | orchestrator |
2026-04-06 05:15:01.550488 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-06 05:15:01.550499 | orchestrator | Monday 06 April 2026 05:14:49 +0000 (0:00:00.152) 0:07:19.169 **********
2026-04-06 05:15:01.550510 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.550521 | orchestrator |
2026-04-06 05:15:01.550532 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-06 05:15:01.550543 | orchestrator | Monday 06 April 2026 05:14:49 +0000 (0:00:00.154) 0:07:19.322 **********
2026-04-06 05:15:01.550554 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.550565 | orchestrator |
2026-04-06 05:15:01.550576 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-06 05:15:01.550587 | orchestrator | Monday 06 April 2026 05:14:49 +0000 (0:00:00.154) 0:07:19.476 **********
2026-04-06 05:15:01.550623 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:15:01.550635 | orchestrator |
2026-04-06 05:15:01.550646 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-06 05:15:01.550657 | orchestrator | Monday 06 April 2026 05:14:51 +0000 (0:00:01.514) 0:07:20.991 **********
2026-04-06 05:15:01.550668 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:15:01.550679 | orchestrator |
2026-04-06 05:15:01.550690 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-06 05:15:01.550701 | orchestrator | Monday 06 April 2026 05:14:51 +0000 (0:00:00.148) 0:07:21.140 **********
2026-04-06 05:15:01.550726 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2
2026-04-06 05:15:01.550738 | orchestrator |
2026-04-06 05:15:01.550749 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-06 05:15:01.550760 | orchestrator | Monday 06 April 2026 05:14:51 +0000 (0:00:00.228) 0:07:21.369 **********
2026-04-06 05:15:01.550771 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.550782 | orchestrator |
2026-04-06 05:15:01.550794 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-06 05:15:01.550805 | orchestrator | Monday 06 April 2026 05:14:51 +0000 (0:00:00.166) 0:07:21.536 **********
2026-04-06 05:15:01.550816 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.550827 | orchestrator |
2026-04-06 05:15:01.550838 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-06 05:15:01.550849 | orchestrator | Monday 06 April 2026 05:14:51 +0000 (0:00:00.148) 0:07:21.684 **********
2026-04-06 05:15:01.550860 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.550871 | orchestrator |
2026-04-06 05:15:01.550882 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-06 05:15:01.550893 | orchestrator | Monday 06 April 2026 05:14:52 +0000 (0:00:00.143) 0:07:21.828 **********
2026-04-06 05:15:01.550904 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.550915 | orchestrator |
2026-04-06 05:15:01.550926 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-06 05:15:01.550938 | orchestrator | Monday 06 April 2026 05:14:52 +0000 (0:00:00.157) 0:07:21.986 **********
2026-04-06 05:15:01.550949 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.550981 | orchestrator |
2026-04-06 05:15:01.550992 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-06 05:15:01.551003 | orchestrator | Monday 06 April 2026 05:14:52 +0000 (0:00:00.159) 0:07:22.145 **********
2026-04-06 05:15:01.551014 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.551025 | orchestrator |
2026-04-06 05:15:01.551036 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-06 05:15:01.551046 | orchestrator | Monday 06 April 2026 05:14:52 +0000 (0:00:00.419) 0:07:22.565 **********
2026-04-06 05:15:01.551057 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.551068 | orchestrator |
2026-04-06 05:15:01.551079 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-06 05:15:01.551090 | orchestrator | Monday 06 April 2026 05:14:53 +0000 (0:00:00.167) 0:07:22.733 **********
2026-04-06 05:15:01.551101 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.551112 | orchestrator |
2026-04-06 05:15:01.551123 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-06 05:15:01.551134 | orchestrator | Monday 06 April 2026 05:14:53 +0000 (0:00:00.142) 0:07:22.876 **********
2026-04-06 05:15:01.551145 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:15:01.551156 | orchestrator |
2026-04-06 05:15:01.551167 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-06 05:15:01.551177 | orchestrator | Monday 06 April 2026 05:14:53 +0000 (0:00:00.251) 0:07:23.127 **********
2026-04-06 05:15:01.551188 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2
2026-04-06 05:15:01.551200 | orchestrator |
2026-04-06 05:15:01.551221 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-06 05:15:01.551249 | orchestrator | Monday 06 April 2026 05:14:53 +0000 (0:00:00.217) 0:07:23.344 **********
2026-04-06 05:15:01.551261 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph)
2026-04-06 05:15:01.551272 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/)
2026-04-06 05:15:01.551283 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-04-06 05:15:01.551294 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-04-06 05:15:01.551305 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-04-06 05:15:01.551315 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-04-06 05:15:01.551326 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-04-06 05:15:01.551337 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-04-06 05:15:01.551348 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-06 05:15:01.551358 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-06 05:15:01.551369 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-06 05:15:01.551380 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-06 05:15:01.551391 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-06 05:15:01.551402 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-06 05:15:01.551413 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph)
2026-04-06 05:15:01.551423 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph)
2026-04-06 05:15:01.551434 | orchestrator |
2026-04-06 05:15:01.551445 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-06 05:15:01.551456 | orchestrator | Monday 06 April 2026 05:14:59 +0000 (0:00:05.578) 0:07:28.923 **********
2026-04-06 05:15:01.551466 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.551478 | orchestrator |
2026-04-06 05:15:01.551488 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-06 05:15:01.551499 | orchestrator | Monday 06 April 2026 05:14:59 +0000 (0:00:00.170) 0:07:29.093 **********
2026-04-06 05:15:01.551510 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.551521 | orchestrator |
2026-04-06 05:15:01.551531 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-06 05:15:01.551542 | orchestrator | Monday 06 April 2026 05:14:59 +0000 (0:00:00.126) 0:07:29.220 **********
2026-04-06 05:15:01.551553 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.551564 | orchestrator |
2026-04-06 05:15:01.551574 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-06 05:15:01.551591 | orchestrator | Monday 06 April 2026 05:14:59 +0000 (0:00:00.139) 0:07:29.359 **********
2026-04-06 05:15:01.551602 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.551612 | orchestrator |
2026-04-06 05:15:01.551623 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-06 05:15:01.551634 | orchestrator | Monday 06 April 2026 05:14:59 +0000 (0:00:00.117) 0:07:29.477 **********
2026-04-06 05:15:01.551645 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.551656 | orchestrator |
2026-04-06 05:15:01.551666 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-06 05:15:01.551677 | orchestrator | Monday 06 April 2026 05:14:59 +0000 (0:00:00.129) 0:07:29.606 **********
2026-04-06 05:15:01.551688 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.551699 | orchestrator |
2026-04-06 05:15:01.551710 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-06 05:15:01.551721 | orchestrator | Monday 06 April 2026 05:15:00 +0000 (0:00:00.460) 0:07:30.066 **********
2026-04-06 05:15:01.551731 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.551742 | orchestrator |
2026-04-06 05:15:01.551753 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-06 05:15:01.551773 | orchestrator | Monday 06 April 2026 05:15:00 +0000 (0:00:00.157) 0:07:30.223 **********
2026-04-06 05:15:01.551784 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.551795 | orchestrator |
2026-04-06 05:15:01.551806 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-06 05:15:01.551817 | orchestrator | Monday 06 April 2026 05:15:00 +0000 (0:00:00.126) 0:07:30.350 **********
2026-04-06 05:15:01.551827 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.551838 | orchestrator |
2026-04-06 05:15:01.551849 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-06 05:15:01.551860 | orchestrator | Monday 06 April 2026 05:15:00 +0000 (0:00:00.127) 0:07:30.493 **********
2026-04-06 05:15:01.551870 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.551881 | orchestrator |
2026-04-06 05:15:01.551892 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-06 05:15:01.551902 | orchestrator | Monday 06 April 2026 05:15:00 +0000 (0:00:00.127) 0:07:30.620 **********
2026-04-06 05:15:01.551913 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.551924 | orchestrator |
2026-04-06 05:15:01.551935 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-06 05:15:01.551946 | orchestrator | Monday 06 April 2026 05:15:01 +0000 (0:00:00.126) 0:07:30.746 **********
2026-04-06 05:15:01.551995 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.552007 | orchestrator |
2026-04-06 05:15:01.552018 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-06 05:15:01.552029 | orchestrator | Monday 06 April 2026 05:15:01 +0000 (0:00:00.141) 0:07:30.888 **********
2026-04-06 05:15:01.552040 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.552050 | orchestrator |
2026-04-06 05:15:01.552061 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-06 05:15:01.552072 | orchestrator | Monday 06 April 2026 05:15:01 +0000 (0:00:00.230) 0:07:31.118 **********
2026-04-06 05:15:01.552083 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:01.552094 | orchestrator |
2026-04-06 05:15:01.552105 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-06 05:15:01.552123 | orchestrator | Monday 06 April 2026 05:15:01 +0000 (0:00:00.137) 0:07:31.256 **********
2026-04-06 05:15:19.585976 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:19.586124 | orchestrator |
2026-04-06 05:15:19.586137 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-06 05:15:19.586147 | orchestrator | Monday 06 April 2026 05:15:01 +0000 (0:00:00.147) 0:07:31.517 **********
2026-04-06 05:15:19.586154 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:15:19.586161 | orchestrator |
2026-04-06 05:15:19.586168 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-06 05:15:19.586175 | orchestrator | Monday 06 April 2026 05:15:01 +0000 (0:00:00.147)
0:07:31.665 ********** 2026-04-06 05:15:19.586182 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:15:19.586189 | orchestrator | 2026-04-06 05:15:19.586197 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-06 05:15:19.586205 | orchestrator | Monday 06 April 2026 05:15:02 +0000 (0:00:00.150) 0:07:31.815 ********** 2026-04-06 05:15:19.586213 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:15:19.586219 | orchestrator | 2026-04-06 05:15:19.586226 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-06 05:15:19.586233 | orchestrator | Monday 06 April 2026 05:15:02 +0000 (0:00:00.166) 0:07:31.981 ********** 2026-04-06 05:15:19.586240 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:15:19.586246 | orchestrator | 2026-04-06 05:15:19.586252 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-06 05:15:19.586258 | orchestrator | Monday 06 April 2026 05:15:02 +0000 (0:00:00.153) 0:07:32.135 ********** 2026-04-06 05:15:19.586265 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:15:19.586294 | orchestrator | 2026-04-06 05:15:19.586301 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-06 05:15:19.586307 | orchestrator | Monday 06 April 2026 05:15:02 +0000 (0:00:00.438) 0:07:32.574 ********** 2026-04-06 05:15:19.586314 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:15:19.586321 | orchestrator | 2026-04-06 05:15:19.586328 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-06 05:15:19.586335 | orchestrator | Monday 06 April 2026 05:15:03 +0000 (0:00:00.157) 0:07:32.732 ********** 2026-04-06 05:15:19.586342 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-04-06 05:15:19.586349 | 
orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-06 05:15:19.586357 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-06 05:15:19.586363 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:15:19.586370 | orchestrator | 2026-04-06 05:15:19.586376 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-06 05:15:19.586395 | orchestrator | Monday 06 April 2026 05:15:03 +0000 (0:00:00.466) 0:07:33.198 ********** 2026-04-06 05:15:19.586403 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-04-06 05:15:19.586410 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-06 05:15:19.586417 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-06 05:15:19.586424 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:15:19.586430 | orchestrator | 2026-04-06 05:15:19.586436 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-06 05:15:19.586443 | orchestrator | Monday 06 April 2026 05:15:03 +0000 (0:00:00.435) 0:07:33.634 ********** 2026-04-06 05:15:19.586450 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-04-06 05:15:19.586457 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-06 05:15:19.586463 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-06 05:15:19.586470 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:15:19.586477 | orchestrator | 2026-04-06 05:15:19.586483 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-06 05:15:19.586490 | orchestrator | Monday 06 April 2026 05:15:04 +0000 (0:00:00.402) 0:07:34.037 ********** 2026-04-06 05:15:19.586497 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:15:19.586504 | orchestrator | 2026-04-06 05:15:19.586511 | orchestrator | TASK [ceph-facts : 
Set_fact rgw_instances] ************************************* 2026-04-06 05:15:19.586519 | orchestrator | Monday 06 April 2026 05:15:04 +0000 (0:00:00.145) 0:07:34.183 ********** 2026-04-06 05:15:19.586528 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-04-06 05:15:19.586535 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:15:19.586542 | orchestrator | 2026-04-06 05:15:19.586549 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-06 05:15:19.586556 | orchestrator | Monday 06 April 2026 05:15:04 +0000 (0:00:00.372) 0:07:34.555 ********** 2026-04-06 05:15:19.586564 | orchestrator | changed: [testbed-node-2] 2026-04-06 05:15:19.586571 | orchestrator | 2026-04-06 05:15:19.586579 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-04-06 05:15:19.586586 | orchestrator | Monday 06 April 2026 05:15:05 +0000 (0:00:00.835) 0:07:35.391 ********** 2026-04-06 05:15:19.586593 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:15:19.586601 | orchestrator | 2026-04-06 05:15:19.586608 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-04-06 05:15:19.586615 | orchestrator | Monday 06 April 2026 05:15:05 +0000 (0:00:00.152) 0:07:35.544 ********** 2026-04-06 05:15:19.586623 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2 2026-04-06 05:15:19.586631 | orchestrator | 2026-04-06 05:15:19.586638 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-04-06 05:15:19.586645 | orchestrator | Monday 06 April 2026 05:15:06 +0000 (0:00:00.251) 0:07:35.795 ********** 2026-04-06 05:15:19.586652 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:15:19.586667 | orchestrator | 2026-04-06 05:15:19.586674 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-04-06 05:15:19.586682 | 
orchestrator | Monday 06 April 2026 05:15:08 +0000 (0:00:02.791) 0:07:38.587 ********** 2026-04-06 05:15:19.586688 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:15:19.586696 | orchestrator | 2026-04-06 05:15:19.586704 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-04-06 05:15:19.586729 | orchestrator | Monday 06 April 2026 05:15:09 +0000 (0:00:00.176) 0:07:38.763 ********** 2026-04-06 05:15:19.586737 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:15:19.586744 | orchestrator | 2026-04-06 05:15:19.586752 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-04-06 05:15:19.586760 | orchestrator | Monday 06 April 2026 05:15:09 +0000 (0:00:00.160) 0:07:38.924 ********** 2026-04-06 05:15:19.586768 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:15:19.586776 | orchestrator | 2026-04-06 05:15:19.586782 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-04-06 05:15:19.586789 | orchestrator | Monday 06 April 2026 05:15:09 +0000 (0:00:00.163) 0:07:39.087 ********** 2026-04-06 05:15:19.586796 | orchestrator | changed: [testbed-node-2] 2026-04-06 05:15:19.586804 | orchestrator | 2026-04-06 05:15:19.586811 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-04-06 05:15:19.586817 | orchestrator | Monday 06 April 2026 05:15:10 +0000 (0:00:01.040) 0:07:40.128 ********** 2026-04-06 05:15:19.586824 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:15:19.586830 | orchestrator | 2026-04-06 05:15:19.586836 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-04-06 05:15:19.586841 | orchestrator | Monday 06 April 2026 05:15:11 +0000 (0:00:00.617) 0:07:40.745 ********** 2026-04-06 05:15:19.586847 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:15:19.586854 | orchestrator | 2026-04-06 05:15:19.586860 | 
orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-04-06 05:15:19.586866 | orchestrator | Monday 06 April 2026 05:15:11 +0000 (0:00:00.488) 0:07:41.234 ********** 2026-04-06 05:15:19.586872 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:15:19.586878 | orchestrator | 2026-04-06 05:15:19.586885 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-04-06 05:15:19.586891 | orchestrator | Monday 06 April 2026 05:15:11 +0000 (0:00:00.474) 0:07:41.708 ********** 2026-04-06 05:15:19.586898 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:15:19.586904 | orchestrator | 2026-04-06 05:15:19.586910 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-04-06 05:15:19.586917 | orchestrator | Monday 06 April 2026 05:15:12 +0000 (0:00:00.553) 0:07:42.261 ********** 2026-04-06 05:15:19.586923 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:15:19.586930 | orchestrator | 2026-04-06 05:15:19.586936 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-04-06 05:15:19.586942 | orchestrator | Monday 06 April 2026 05:15:13 +0000 (0:00:00.580) 0:07:42.842 ********** 2026-04-06 05:15:19.586949 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 05:15:19.586985 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-06 05:15:19.586991 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-04-06 05:15:19.586997 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-04-06 05:15:19.587003 | orchestrator | 2026-04-06 05:15:19.587010 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-04-06 05:15:19.587014 | orchestrator | Monday 06 April 2026 05:15:15 +0000 (0:00:02.769) 0:07:45.612 
********** 2026-04-06 05:15:19.587017 | orchestrator | changed: [testbed-node-2] 2026-04-06 05:15:19.587021 | orchestrator | 2026-04-06 05:15:19.587025 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-04-06 05:15:19.587029 | orchestrator | Monday 06 April 2026 05:15:16 +0000 (0:00:01.049) 0:07:46.661 ********** 2026-04-06 05:15:19.587039 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:15:19.587043 | orchestrator | 2026-04-06 05:15:19.587046 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-04-06 05:15:19.587050 | orchestrator | Monday 06 April 2026 05:15:17 +0000 (0:00:00.142) 0:07:46.804 ********** 2026-04-06 05:15:19.587054 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:15:19.587058 | orchestrator | 2026-04-06 05:15:19.587061 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-04-06 05:15:19.587065 | orchestrator | Monday 06 April 2026 05:15:17 +0000 (0:00:00.442) 0:07:47.247 ********** 2026-04-06 05:15:19.587069 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:15:19.587073 | orchestrator | 2026-04-06 05:15:19.587076 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-04-06 05:15:19.587080 | orchestrator | Monday 06 April 2026 05:15:18 +0000 (0:00:00.745) 0:07:47.993 ********** 2026-04-06 05:15:19.587084 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:15:19.587088 | orchestrator | 2026-04-06 05:15:19.587092 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-04-06 05:15:19.587096 | orchestrator | Monday 06 April 2026 05:15:18 +0000 (0:00:00.475) 0:07:48.468 ********** 2026-04-06 05:15:19.587099 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:15:19.587103 | orchestrator | 2026-04-06 05:15:19.587107 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] 
************************************ 2026-04-06 05:15:19.587111 | orchestrator | Monday 06 April 2026 05:15:18 +0000 (0:00:00.134) 0:07:48.602 ********** 2026-04-06 05:15:19.587114 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2 2026-04-06 05:15:19.587118 | orchestrator | 2026-04-06 05:15:19.587122 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-04-06 05:15:19.587126 | orchestrator | Monday 06 April 2026 05:15:19 +0000 (0:00:00.207) 0:07:48.810 ********** 2026-04-06 05:15:19.587130 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:15:19.587133 | orchestrator | 2026-04-06 05:15:19.587137 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-04-06 05:15:19.587141 | orchestrator | Monday 06 April 2026 05:15:19 +0000 (0:00:00.136) 0:07:48.946 ********** 2026-04-06 05:15:19.587145 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:15:19.587149 | orchestrator | 2026-04-06 05:15:19.587152 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-04-06 05:15:19.587156 | orchestrator | Monday 06 April 2026 05:15:19 +0000 (0:00:00.146) 0:07:49.092 ********** 2026-04-06 05:15:19.587160 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2 2026-04-06 05:15:19.587164 | orchestrator | 2026-04-06 05:15:19.587167 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-04-06 05:15:19.587177 | orchestrator | Monday 06 April 2026 05:15:19 +0000 (0:00:00.201) 0:07:49.294 ********** 2026-04-06 05:16:07.409646 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:16:07.409760 | orchestrator | 2026-04-06 05:16:07.409777 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-04-06 05:16:07.409790 | orchestrator | Monday 06 April 2026 05:15:20 +0000 
(0:00:01.296) 0:07:50.591 ********** 2026-04-06 05:16:07.409801 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:16:07.409813 | orchestrator | 2026-04-06 05:16:07.409824 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-04-06 05:16:07.409836 | orchestrator | Monday 06 April 2026 05:15:21 +0000 (0:00:00.932) 0:07:51.523 ********** 2026-04-06 05:16:07.409848 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:16:07.409859 | orchestrator | 2026-04-06 05:16:07.409870 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-04-06 05:16:07.409881 | orchestrator | Monday 06 April 2026 05:15:23 +0000 (0:00:01.416) 0:07:52.940 ********** 2026-04-06 05:16:07.409892 | orchestrator | changed: [testbed-node-2] 2026-04-06 05:16:07.409904 | orchestrator | 2026-04-06 05:16:07.409915 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-04-06 05:16:07.409926 | orchestrator | Monday 06 April 2026 05:15:25 +0000 (0:00:02.535) 0:07:55.476 ********** 2026-04-06 05:16:07.410009 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2 2026-04-06 05:16:07.410084 | orchestrator | 2026-04-06 05:16:07.410096 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-04-06 05:16:07.410107 | orchestrator | Monday 06 April 2026 05:15:25 +0000 (0:00:00.220) 0:07:55.697 ********** 2026-04-06 05:16:07.410118 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-04-06 05:16:07.410130 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:16:07.410140 | orchestrator | 2026-04-06 05:16:07.410151 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-06 05:16:07.410163 | orchestrator | Monday 06 April 2026 05:15:47 +0000 (0:00:21.833) 0:08:17.530 ********** 2026-04-06 05:16:07.410174 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:16:07.410187 | orchestrator | 2026-04-06 05:16:07.410200 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-06 05:16:07.410213 | orchestrator | Monday 06 April 2026 05:15:49 +0000 (0:00:01.980) 0:08:19.510 ********** 2026-04-06 05:16:07.410226 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:07.410239 | orchestrator | 2026-04-06 05:16:07.410252 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-06 05:16:07.410280 | orchestrator | Monday 06 April 2026 05:15:49 +0000 (0:00:00.129) 0:08:19.640 ********** 2026-04-06 05:16:07.410296 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-06 05:16:07.410322 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-04-06 05:16:07.410334 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-06 05:16:07.410345 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-06 05:16:07.410357 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-06 05:16:07.410369 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__1e9077e28326f7e20726952fdb430170f94bc239'}])  2026-04-06 05:16:07.410383 | orchestrator | 2026-04-06 05:16:07.410412 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-04-06 05:16:07.410433 | orchestrator | Monday 06 April 2026 05:15:58 +0000 (0:00:08.950) 0:08:28.591 ********** 2026-04-06 05:16:07.410445 | orchestrator | changed: [testbed-node-2] 2026-04-06 05:16:07.410456 | orchestrator | 
2026-04-06 05:16:07.410467 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-06 05:16:07.410477 | orchestrator | Monday 06 April 2026 05:16:00 +0000 (0:00:01.438) 0:08:30.030 ********** 2026-04-06 05:16:07.410489 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:16:07.410499 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-04-06 05:16:07.410510 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-04-06 05:16:07.410521 | orchestrator | 2026-04-06 05:16:07.410532 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-06 05:16:07.410543 | orchestrator | Monday 06 April 2026 05:16:01 +0000 (0:00:01.175) 0:08:31.205 ********** 2026-04-06 05:16:07.410554 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-06 05:16:07.410565 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-06 05:16:07.410575 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-06 05:16:07.410586 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:07.410597 | orchestrator | 2026-04-06 05:16:07.410608 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-04-06 05:16:07.410619 | orchestrator | Monday 06 April 2026 05:16:01 +0000 (0:00:00.485) 0:08:31.691 ********** 2026-04-06 05:16:07.410629 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:07.410641 | orchestrator | 2026-04-06 05:16:07.410652 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-04-06 05:16:07.410663 | orchestrator | Monday 06 April 2026 05:16:02 +0000 (0:00:00.140) 0:08:31.831 ********** 2026-04-06 05:16:07.410673 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:16:07.410684 | orchestrator | 2026-04-06 05:16:07.410695 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-06 05:16:07.410706 | orchestrator | Monday 06 April 2026 05:16:04 +0000 (0:00:01.962) 0:08:33.794 ********** 2026-04-06 05:16:07.410716 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:07.410727 | orchestrator | 2026-04-06 05:16:07.410738 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-06 05:16:07.410754 | orchestrator | Monday 06 April 2026 05:16:04 +0000 (0:00:00.137) 0:08:33.931 ********** 2026-04-06 05:16:07.410765 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:07.410776 | orchestrator | 2026-04-06 05:16:07.410787 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-06 05:16:07.410798 | orchestrator | Monday 06 April 2026 05:16:04 +0000 (0:00:00.131) 0:08:34.063 ********** 2026-04-06 05:16:07.410809 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:07.410819 | orchestrator | 2026-04-06 05:16:07.410830 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-06 05:16:07.410841 | orchestrator | Monday 06 April 2026 05:16:04 +0000 (0:00:00.126) 0:08:34.190 ********** 2026-04-06 05:16:07.410852 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:07.410863 | orchestrator | 2026-04-06 05:16:07.410873 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-06 05:16:07.410884 | orchestrator | Monday 06 April 2026 05:16:04 +0000 (0:00:00.140) 0:08:34.331 ********** 2026-04-06 05:16:07.410895 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:07.410906 | 
orchestrator | 2026-04-06 05:16:07.410916 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-04-06 05:16:07.410927 | orchestrator | Monday 06 April 2026 05:16:04 +0000 (0:00:00.131) 0:08:34.463 ********** 2026-04-06 05:16:07.410938 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:07.410970 | orchestrator | 2026-04-06 05:16:07.410982 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-06 05:16:07.410999 | orchestrator | Monday 06 April 2026 05:16:04 +0000 (0:00:00.122) 0:08:34.585 ********** 2026-04-06 05:16:07.411010 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:07.411021 | orchestrator | 2026-04-06 05:16:07.411032 | orchestrator | PLAY [Reset mon_host] ********************************************************** 2026-04-06 05:16:07.411043 | orchestrator | 2026-04-06 05:16:07.411054 | orchestrator | TASK [Reset mon_host fact] ***************************************************** 2026-04-06 05:16:07.411064 | orchestrator | Monday 06 April 2026 05:16:05 +0000 (0:00:00.770) 0:08:35.356 ********** 2026-04-06 05:16:07.411075 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:16:07.411086 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:16:07.411097 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:16:07.411108 | orchestrator | 2026-04-06 05:16:07.411119 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-04-06 05:16:07.411130 | orchestrator | 2026-04-06 05:16:07.411141 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-04-06 05:16:07.411151 | orchestrator | Monday 06 April 2026 05:16:06 +0000 (0:00:01.054) 0:08:36.411 ********** 2026-04-06 05:16:07.411162 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:16:07.411173 | orchestrator | 2026-04-06 05:16:07.411184 | orchestrator | TASK [ceph-facts : Include facts.yml] 
****************************************** 2026-04-06 05:16:07.411195 | orchestrator | Monday 06 April 2026 05:16:06 +0000 (0:00:00.219) 0:08:36.631 ********** 2026-04-06 05:16:07.411205 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:16:07.411216 | orchestrator | 2026-04-06 05:16:07.411227 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-06 05:16:07.411238 | orchestrator | Monday 06 April 2026 05:16:07 +0000 (0:00:00.210) 0:08:36.841 ********** 2026-04-06 05:16:07.411249 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:16:07.411259 | orchestrator | 2026-04-06 05:16:07.411270 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-06 05:16:07.411281 | orchestrator | Monday 06 April 2026 05:16:07 +0000 (0:00:00.133) 0:08:36.975 ********** 2026-04-06 05:16:07.411292 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:16:07.411303 | orchestrator | 2026-04-06 05:16:07.411321 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-06 05:16:14.141896 | orchestrator | Monday 06 April 2026 05:16:07 +0000 (0:00:00.140) 0:08:37.115 ********** 2026-04-06 05:16:14.142159 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:16:14.142185 | orchestrator | 2026-04-06 05:16:14.142202 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-06 05:16:14.142219 | orchestrator | Monday 06 April 2026 05:16:07 +0000 (0:00:00.137) 0:08:37.253 ********** 2026-04-06 05:16:14.142236 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:16:14.142252 | orchestrator | 2026-04-06 05:16:14.142268 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-06 05:16:14.142285 | orchestrator | Monday 06 April 2026 05:16:07 +0000 (0:00:00.118) 0:08:37.372 ********** 2026-04-06 05:16:14.142300 | orchestrator | skipping: 
[testbed-node-0]
2026-04-06 05:16:14.142314 | orchestrator |
2026-04-06 05:16:14.142331 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-06 05:16:14.142347 | orchestrator | Monday 06 April 2026 05:16:07 +0000 (0:00:00.148) 0:08:37.520 **********
2026-04-06 05:16:14.142364 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.142381 | orchestrator |
2026-04-06 05:16:14.142397 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-06 05:16:14.142414 | orchestrator | Monday 06 April 2026 05:16:07 +0000 (0:00:00.130) 0:08:37.651 **********
2026-04-06 05:16:14.142432 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.142449 | orchestrator |
2026-04-06 05:16:14.142468 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-06 05:16:14.142486 | orchestrator | Monday 06 April 2026 05:16:08 +0000 (0:00:00.160) 0:08:37.811 **********
2026-04-06 05:16:14.142504 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.142522 | orchestrator |
2026-04-06 05:16:14.142569 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-06 05:16:14.142588 | orchestrator | Monday 06 April 2026 05:16:08 +0000 (0:00:00.135) 0:08:37.947 **********
2026-04-06 05:16:14.142605 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.142623 | orchestrator |
2026-04-06 05:16:14.142640 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-06 05:16:14.142657 | orchestrator | Monday 06 April 2026 05:16:08 +0000 (0:00:00.421) 0:08:38.369 **********
2026-04-06 05:16:14.142675 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.142693 | orchestrator |
2026-04-06 05:16:14.142711 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-06 05:16:14.142728 | orchestrator | Monday 06 April 2026 05:16:08 +0000 (0:00:00.218) 0:08:38.587 **********
2026-04-06 05:16:14.142763 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.142780 | orchestrator |
2026-04-06 05:16:14.142798 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-06 05:16:14.142815 | orchestrator | Monday 06 April 2026 05:16:09 +0000 (0:00:00.140) 0:08:38.728 **********
2026-04-06 05:16:14.142831 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.142847 | orchestrator |
2026-04-06 05:16:14.142863 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-06 05:16:14.142880 | orchestrator | Monday 06 April 2026 05:16:09 +0000 (0:00:00.149) 0:08:38.877 **********
2026-04-06 05:16:14.142896 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.142913 | orchestrator |
2026-04-06 05:16:14.142929 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-06 05:16:14.142972 | orchestrator | Monday 06 April 2026 05:16:09 +0000 (0:00:00.132) 0:08:39.010 **********
2026-04-06 05:16:14.142990 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.143007 | orchestrator |
2026-04-06 05:16:14.143023 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-06 05:16:14.143040 | orchestrator | Monday 06 April 2026 05:16:09 +0000 (0:00:00.143) 0:08:39.153 **********
2026-04-06 05:16:14.143055 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.143071 | orchestrator |
2026-04-06 05:16:14.143087 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-06 05:16:14.143104 | orchestrator | Monday 06 April 2026 05:16:09 +0000 (0:00:00.144) 0:08:39.298 **********
2026-04-06 05:16:14.143121 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.143137 | orchestrator |
2026-04-06 05:16:14.143152 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-06 05:16:14.143168 | orchestrator | Monday 06 April 2026 05:16:09 +0000 (0:00:00.152) 0:08:39.451 **********
2026-04-06 05:16:14.143184 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.143199 | orchestrator |
2026-04-06 05:16:14.143214 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-06 05:16:14.143246 | orchestrator | Monday 06 April 2026 05:16:09 +0000 (0:00:00.147) 0:08:39.599 **********
2026-04-06 05:16:14.143261 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.143276 | orchestrator |
2026-04-06 05:16:14.143292 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-06 05:16:14.143307 | orchestrator | Monday 06 April 2026 05:16:10 +0000 (0:00:00.144) 0:08:39.728 **********
2026-04-06 05:16:14.143323 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.143339 | orchestrator |
2026-04-06 05:16:14.143354 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-06 05:16:14.143370 | orchestrator | Monday 06 April 2026 05:16:10 +0000 (0:00:00.144) 0:08:39.873 **********
2026-04-06 05:16:14.143386 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.143400 | orchestrator |
2026-04-06 05:16:14.143414 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-06 05:16:14.143429 | orchestrator | Monday 06 April 2026 05:16:10 +0000 (0:00:00.131) 0:08:40.004 **********
2026-04-06 05:16:14.143445 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.143475 | orchestrator |
2026-04-06 05:16:14.143491 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-06 05:16:14.143506 | orchestrator | Monday 06 April 2026 05:16:10 +0000 (0:00:00.423) 0:08:40.427 **********
2026-04-06 05:16:14.143551 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.143568 | orchestrator |
2026-04-06 05:16:14.143582 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-06 05:16:14.143623 | orchestrator | Monday 06 April 2026 05:16:10 +0000 (0:00:00.212) 0:08:40.640 **********
2026-04-06 05:16:14.143639 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.143654 | orchestrator |
2026-04-06 05:16:14.143669 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-06 05:16:14.143686 | orchestrator | Monday 06 April 2026 05:16:11 +0000 (0:00:00.154) 0:08:40.794 **********
2026-04-06 05:16:14.143702 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.143717 | orchestrator |
2026-04-06 05:16:14.143732 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-06 05:16:14.143747 | orchestrator | Monday 06 April 2026 05:16:11 +0000 (0:00:00.158) 0:08:40.953 **********
2026-04-06 05:16:14.143761 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.143777 | orchestrator |
2026-04-06 05:16:14.143814 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-06 05:16:14.143831 | orchestrator | Monday 06 April 2026 05:16:11 +0000 (0:00:00.142) 0:08:41.096 **********
2026-04-06 05:16:14.143847 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.143863 | orchestrator |
2026-04-06 05:16:14.143878 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-06 05:16:14.143893 | orchestrator | Monday 06 April 2026 05:16:11 +0000 (0:00:00.131) 0:08:41.227 **********
2026-04-06 05:16:14.143908 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.143923 | orchestrator |
2026-04-06 05:16:14.143938 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-06 05:16:14.143980 | orchestrator | Monday 06 April 2026 05:16:11 +0000 (0:00:00.150) 0:08:41.378 **********
2026-04-06 05:16:14.143995 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.144010 | orchestrator |
2026-04-06 05:16:14.144025 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-06 05:16:14.144040 | orchestrator | Monday 06 April 2026 05:16:11 +0000 (0:00:00.122) 0:08:41.500 **********
2026-04-06 05:16:14.144057 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.144072 | orchestrator |
2026-04-06 05:16:14.144086 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-06 05:16:14.144101 | orchestrator | Monday 06 April 2026 05:16:11 +0000 (0:00:00.161) 0:08:41.662 **********
2026-04-06 05:16:14.144116 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.144131 | orchestrator |
2026-04-06 05:16:14.144148 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-06 05:16:14.144163 | orchestrator | Monday 06 April 2026 05:16:12 +0000 (0:00:00.201) 0:08:41.863 **********
2026-04-06 05:16:14.144178 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.144192 | orchestrator |
2026-04-06 05:16:14.144221 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-06 05:16:14.144237 | orchestrator | Monday 06 April 2026 05:16:12 +0000 (0:00:00.134) 0:08:41.998 **********
2026-04-06 05:16:14.144251 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.144266 | orchestrator |
2026-04-06 05:16:14.144280 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-06 05:16:14.144295 | orchestrator | Monday 06 April 2026 05:16:12 +0000 (0:00:00.428) 0:08:42.426 **********
2026-04-06 05:16:14.144311 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.144327 | orchestrator |
2026-04-06 05:16:14.144342 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-06 05:16:14.144358 | orchestrator | Monday 06 April 2026 05:16:12 +0000 (0:00:00.132) 0:08:42.559 **********
2026-04-06 05:16:14.144372 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.144402 | orchestrator |
2026-04-06 05:16:14.144419 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-06 05:16:14.144435 | orchestrator | Monday 06 April 2026 05:16:12 +0000 (0:00:00.139) 0:08:42.698 **********
2026-04-06 05:16:14.144451 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.144467 | orchestrator |
2026-04-06 05:16:14.144483 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-06 05:16:14.144498 | orchestrator | Monday 06 April 2026 05:16:13 +0000 (0:00:00.159) 0:08:42.858 **********
2026-04-06 05:16:14.144513 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.144527 | orchestrator |
2026-04-06 05:16:14.144542 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-06 05:16:14.144558 | orchestrator | Monday 06 April 2026 05:16:13 +0000 (0:00:00.143) 0:08:43.001 **********
2026-04-06 05:16:14.144573 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.144589 | orchestrator |
2026-04-06 05:16:14.144604 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-06 05:16:14.144623 | orchestrator | Monday 06 April 2026 05:16:13 +0000 (0:00:00.145) 0:08:43.146 **********
2026-04-06 05:16:14.144639 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.144656 | orchestrator |
2026-04-06 05:16:14.144672 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-06 05:16:14.144689 | orchestrator | Monday 06 April 2026 05:16:13 +0000 (0:00:00.153) 0:08:43.300 **********
2026-04-06 05:16:14.144705 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.144723 | orchestrator |
2026-04-06 05:16:14.144738 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-06 05:16:14.144754 | orchestrator | Monday 06 April 2026 05:16:13 +0000 (0:00:00.145) 0:08:43.446 **********
2026-04-06 05:16:14.144769 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.144785 | orchestrator |
2026-04-06 05:16:14.144802 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-06 05:16:14.144819 | orchestrator | Monday 06 April 2026 05:16:13 +0000 (0:00:00.139) 0:08:43.585 **********
2026-04-06 05:16:14.144836 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.144851 | orchestrator |
2026-04-06 05:16:14.144867 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-06 05:16:14.144884 | orchestrator | Monday 06 April 2026 05:16:14 +0000 (0:00:00.137) 0:08:43.723 **********
2026-04-06 05:16:14.144901 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:14.144917 | orchestrator |
2026-04-06 05:16:14.144933 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-06 05:16:14.145001 | orchestrator | Monday 06 April 2026 05:16:14 +0000 (0:00:00.131) 0:08:43.854 **********
2026-04-06 05:16:22.695097 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:22.695185 | orchestrator |
2026-04-06 05:16:22.695195 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-06 05:16:22.695203 | orchestrator | Monday 06 April 2026 05:16:14 +0000 (0:00:00.137) 0:08:43.991 **********
2026-04-06 05:16:22.695208 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:22.695214 | orchestrator |
2026-04-06 05:16:22.695220 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-06 05:16:22.695226 | orchestrator | Monday 06 April 2026 05:16:14 +0000 (0:00:00.252) 0:08:44.244 **********
2026-04-06 05:16:22.695232 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:22.695238 | orchestrator |
2026-04-06 05:16:22.695245 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-06 05:16:22.695251 | orchestrator | Monday 06 April 2026 05:16:14 +0000 (0:00:00.147) 0:08:44.391 **********
2026-04-06 05:16:22.695257 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:22.695263 | orchestrator |
2026-04-06 05:16:22.695269 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-06 05:16:22.695275 | orchestrator | Monday 06 April 2026 05:16:15 +0000 (0:00:00.879) 0:08:45.270 **********
2026-04-06 05:16:22.695302 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:22.695308 | orchestrator |
2026-04-06 05:16:22.695313 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-06 05:16:22.695319 | orchestrator | Monday 06 April 2026 05:16:15 +0000 (0:00:00.138) 0:08:45.409 **********
2026-04-06 05:16:22.695324 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:22.695330 | orchestrator |
2026-04-06 05:16:22.695336 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-06 05:16:22.695343 | orchestrator | Monday 06 April 2026 05:16:15 +0000 (0:00:00.138) 0:08:45.547 **********
2026-04-06 05:16:22.695348 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:22.695354 | orchestrator |
2026-04-06 05:16:22.695359 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-06 05:16:22.695365 | orchestrator | Monday 06 April 2026 05:16:15 +0000 (0:00:00.133) 0:08:45.681 **********
2026-04-06 05:16:22.695371 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:22.695376 | orchestrator |
2026-04-06 05:16:22.695381 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-06 05:16:22.695400 | orchestrator | Monday 06 April 2026 05:16:16 +0000 (0:00:00.145) 0:08:45.826 **********
2026-04-06 05:16:22.695406 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:22.695411 | orchestrator |
2026-04-06 05:16:22.695417 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-06 05:16:22.695422 | orchestrator | Monday 06 April 2026 05:16:16 +0000 (0:00:00.143) 0:08:45.970 **********
2026-04-06 05:16:22.695427 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:22.695433 | orchestrator |
2026-04-06 05:16:22.695439 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-06 05:16:22.695444 | orchestrator | Monday 06 April 2026 05:16:16 +0000 (0:00:00.153) 0:08:46.124 **********
2026-04-06 05:16:22.695450 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-06 05:16:22.695455 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-06 05:16:22.695461 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-06 05:16:22.695466 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:22.695472 | orchestrator |
2026-04-06 05:16:22.695477 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-06 05:16:22.695483 | orchestrator | Monday 06 April 2026 05:16:16 +0000 (0:00:00.406) 0:08:46.530 **********
2026-04-06 05:16:22.695488 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-06 05:16:22.695494 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-06 05:16:22.695499 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-06 05:16:22.695504 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:22.695510 | orchestrator |
2026-04-06 05:16:22.695515 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-06 05:16:22.695521 | orchestrator | Monday 06 April 2026 05:16:17 +0000 (0:00:00.393) 0:08:46.923 **********
2026-04-06 05:16:22.695526 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-06 05:16:22.695531 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-06 05:16:22.695537 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-06 05:16:22.695542 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:22.695547 | orchestrator |
2026-04-06 05:16:22.695553 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-06 05:16:22.695559 | orchestrator | Monday 06 April 2026 05:16:17 +0000 (0:00:00.396) 0:08:47.320 **********
2026-04-06 05:16:22.695564 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:22.695570 | orchestrator |
2026-04-06 05:16:22.695575 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-06 05:16:22.695581 | orchestrator | Monday 06 April 2026 05:16:17 +0000 (0:00:00.118) 0:08:47.438 **********
2026-04-06 05:16:22.695592 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-04-06 05:16:22.695598 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:22.695603 | orchestrator |
2026-04-06 05:16:22.695609 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-06 05:16:22.695614 | orchestrator | Monday 06 April 2026 05:16:18 +0000 (0:00:00.327) 0:08:47.766 **********
2026-04-06 05:16:22.695620 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:22.695625 | orchestrator |
2026-04-06 05:16:22.695630 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-06 05:16:22.695636 | orchestrator | Monday 06 April 2026 05:16:18 +0000 (0:00:00.490) 0:08:48.256 **********
2026-04-06 05:16:22.695641 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 05:16:22.695646 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-06 05:16:22.695652 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-06 05:16:22.695670 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:22.695676 | orchestrator |
2026-04-06 05:16:22.695681 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-06 05:16:22.695687 | orchestrator | Monday 06 April 2026 05:16:18 +0000 (0:00:00.448) 0:08:48.705 **********
2026-04-06 05:16:22.695692 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:22.695697 | orchestrator |
2026-04-06 05:16:22.695703 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-06 05:16:22.695709 | orchestrator | Monday 06 April 2026 05:16:19 +0000 (0:00:00.121) 0:08:48.826 **********
2026-04-06 05:16:22.695714 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:22.695720 | orchestrator |
2026-04-06 05:16:22.695725 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-06 05:16:22.695731 | orchestrator | Monday 06 April 2026 05:16:19 +0000 (0:00:00.136) 0:08:48.963 **********
2026-04-06 05:16:22.695736 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:22.695742 | orchestrator |
2026-04-06 05:16:22.695747 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-06 05:16:22.695753 | orchestrator | Monday 06 April 2026 05:16:19 +0000 (0:00:00.137) 0:08:49.100 **********
2026-04-06 05:16:22.695758 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:16:22.695764 | orchestrator |
2026-04-06 05:16:22.695769 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-04-06 05:16:22.695774 | orchestrator |
2026-04-06 05:16:22.695780 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-04-06 05:16:22.695785 | orchestrator | Monday 06 April 2026 05:16:19 +0000 (0:00:00.606) 0:08:49.707 **********
2026-04-06 05:16:22.695791 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:22.695796 | orchestrator |
2026-04-06 05:16:22.695802 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-06 05:16:22.695807 | orchestrator | Monday 06 April 2026 05:16:20 +0000 (0:00:00.219) 0:08:49.926 **********
2026-04-06 05:16:22.695813 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:22.695818 | orchestrator |
2026-04-06 05:16:22.695823 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-06 05:16:22.695829 | orchestrator | Monday 06 April 2026 05:16:20 +0000 (0:00:00.271) 0:08:50.198 **********
2026-04-06 05:16:22.695834 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:22.695840 | orchestrator |
2026-04-06 05:16:22.695845 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-06 05:16:22.695854 | orchestrator | Monday 06 April 2026 05:16:20 +0000 (0:00:00.150) 0:08:50.348 **********
2026-04-06 05:16:22.695860 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:22.695865 | orchestrator |
2026-04-06 05:16:22.695871 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-06 05:16:22.695876 | orchestrator | Monday 06 April 2026 05:16:21 +0000 (0:00:00.432) 0:08:50.781 **********
2026-04-06 05:16:22.695881 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:22.695891 | orchestrator |
2026-04-06 05:16:22.695896 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-06 05:16:22.695902 | orchestrator | Monday 06 April 2026 05:16:21 +0000 (0:00:00.146) 0:08:50.927 **********
2026-04-06 05:16:22.695907 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:22.695912 | orchestrator |
2026-04-06 05:16:22.695918 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-06 05:16:22.695923 | orchestrator | Monday 06 April 2026 05:16:21 +0000 (0:00:00.139) 0:08:51.067 **********
2026-04-06 05:16:22.695929 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:22.695934 | orchestrator |
2026-04-06 05:16:22.695940 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-06 05:16:22.695981 | orchestrator | Monday 06 April 2026 05:16:21 +0000 (0:00:00.150) 0:08:51.217 **********
2026-04-06 05:16:22.695987 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:22.695993 | orchestrator |
2026-04-06 05:16:22.695998 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-06 05:16:22.696003 | orchestrator | Monday 06 April 2026 05:16:21 +0000 (0:00:00.134) 0:08:51.352 **********
2026-04-06 05:16:22.696009 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:22.696014 | orchestrator |
2026-04-06 05:16:22.696020 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-06 05:16:22.696025 | orchestrator | Monday 06 April 2026 05:16:21 +0000 (0:00:00.140) 0:08:51.492 **********
2026-04-06 05:16:22.696031 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:22.696036 | orchestrator |
2026-04-06 05:16:22.696041 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-06 05:16:22.696047 | orchestrator | Monday 06 April 2026 05:16:21 +0000 (0:00:00.129) 0:08:51.622 **********
2026-04-06 05:16:22.696052 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:22.696057 | orchestrator |
2026-04-06 05:16:22.696063 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-06 05:16:22.696068 | orchestrator | Monday 06 April 2026 05:16:22 +0000 (0:00:00.137) 0:08:51.759 **********
2026-04-06 05:16:22.696073 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:22.696079 | orchestrator |
2026-04-06 05:16:22.696084 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-06 05:16:22.696090 | orchestrator | Monday 06 April 2026 05:16:22 +0000 (0:00:00.216) 0:08:51.976 **********
2026-04-06 05:16:22.696095 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:22.696100 | orchestrator |
2026-04-06 05:16:22.696106 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-06 05:16:22.696111 | orchestrator | Monday 06 April 2026 05:16:22 +0000 (0:00:00.147) 0:08:52.124 **********
2026-04-06 05:16:22.696116 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:22.696122 | orchestrator |
2026-04-06 05:16:22.696128 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-06 05:16:22.696133 | orchestrator | Monday 06 April 2026 05:16:22 +0000 (0:00:00.147) 0:08:52.272 **********
2026-04-06 05:16:22.696139 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:22.696144 | orchestrator |
2026-04-06 05:16:22.696149 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-06 05:16:22.696160 | orchestrator | Monday 06 April 2026 05:16:22 +0000 (0:00:00.135) 0:08:52.407 **********
2026-04-06 05:16:30.261585 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.261706 | orchestrator |
2026-04-06 05:16:30.261725 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-06 05:16:30.261738 | orchestrator | Monday 06 April 2026 05:16:22 +0000 (0:00:00.137) 0:08:52.544 **********
2026-04-06 05:16:30.261749 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.261761 | orchestrator |
2026-04-06 05:16:30.261772 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-06 05:16:30.261783 | orchestrator | Monday 06 April 2026 05:16:23 +0000 (0:00:00.464) 0:08:53.009 **********
2026-04-06 05:16:30.261793 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.261829 | orchestrator |
2026-04-06 05:16:30.261841 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-06 05:16:30.261852 | orchestrator | Monday 06 April 2026 05:16:23 +0000 (0:00:00.169) 0:08:53.178 **********
2026-04-06 05:16:30.261863 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.261874 | orchestrator |
2026-04-06 05:16:30.261884 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-06 05:16:30.261896 | orchestrator | Monday 06 April 2026 05:16:23 +0000 (0:00:00.137) 0:08:53.315 **********
2026-04-06 05:16:30.261906 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.261917 | orchestrator |
2026-04-06 05:16:30.261928 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-06 05:16:30.261939 | orchestrator | Monday 06 April 2026 05:16:23 +0000 (0:00:00.164) 0:08:53.480 **********
2026-04-06 05:16:30.262009 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.262092 | orchestrator |
2026-04-06 05:16:30.262114 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-06 05:16:30.262136 | orchestrator | Monday 06 April 2026 05:16:23 +0000 (0:00:00.153) 0:08:53.634 **********
2026-04-06 05:16:30.262158 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.262171 | orchestrator |
2026-04-06 05:16:30.262185 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-06 05:16:30.262197 | orchestrator | Monday 06 April 2026 05:16:24 +0000 (0:00:00.160) 0:08:53.795 **********
2026-04-06 05:16:30.262211 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.262223 | orchestrator |
2026-04-06 05:16:30.262236 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-06 05:16:30.262249 | orchestrator | Monday 06 April 2026 05:16:24 +0000 (0:00:00.131) 0:08:53.927 **********
2026-04-06 05:16:30.262276 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.262289 | orchestrator |
2026-04-06 05:16:30.262302 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-06 05:16:30.262314 | orchestrator | Monday 06 April 2026 05:16:24 +0000 (0:00:00.235) 0:08:54.163 **********
2026-04-06 05:16:30.262327 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.262339 | orchestrator |
2026-04-06 05:16:30.262352 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-06 05:16:30.262366 | orchestrator | Monday 06 April 2026 05:16:24 +0000 (0:00:00.157) 0:08:54.321 **********
2026-04-06 05:16:30.262378 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.262391 | orchestrator |
2026-04-06 05:16:30.262404 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-06 05:16:30.262417 | orchestrator | Monday 06 April 2026 05:16:24 +0000 (0:00:00.141) 0:08:54.463 **********
2026-04-06 05:16:30.262429 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.262442 | orchestrator |
2026-04-06 05:16:30.262455 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-06 05:16:30.262468 | orchestrator | Monday 06 April 2026 05:16:24 +0000 (0:00:00.128) 0:08:54.591 **********
2026-04-06 05:16:30.262480 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.262491 | orchestrator |
2026-04-06 05:16:30.262502 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-06 05:16:30.262513 | orchestrator | Monday 06 April 2026 05:16:25 +0000 (0:00:00.131) 0:08:54.722 **********
2026-04-06 05:16:30.262524 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.262535 | orchestrator |
2026-04-06 05:16:30.262546 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-06 05:16:30.262557 | orchestrator | Monday 06 April 2026 05:16:25 +0000 (0:00:00.470) 0:08:55.192 **********
2026-04-06 05:16:30.262568 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.262579 | orchestrator |
2026-04-06 05:16:30.262590 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-06 05:16:30.262601 | orchestrator | Monday 06 April 2026 05:16:25 +0000 (0:00:00.165) 0:08:55.358 **********
2026-04-06 05:16:30.262624 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.262635 | orchestrator |
2026-04-06 05:16:30.262646 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-06 05:16:30.262657 | orchestrator | Monday 06 April 2026 05:16:25 +0000 (0:00:00.175) 0:08:55.534 **********
2026-04-06 05:16:30.262668 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.262679 | orchestrator |
2026-04-06 05:16:30.262690 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-06 05:16:30.262701 | orchestrator | Monday 06 April 2026 05:16:26 +0000 (0:00:00.224) 0:08:55.758 **********
2026-04-06 05:16:30.262712 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.262723 | orchestrator |
2026-04-06 05:16:30.262734 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-06 05:16:30.262745 | orchestrator | Monday 06 April 2026 05:16:26 +0000 (0:00:00.135) 0:08:55.894 **********
2026-04-06 05:16:30.262757 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.262768 | orchestrator |
2026-04-06 05:16:30.262778 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-06 05:16:30.262789 | orchestrator | Monday 06 April 2026 05:16:26 +0000 (0:00:00.148) 0:08:56.043 **********
2026-04-06 05:16:30.262800 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.262811 | orchestrator |
2026-04-06 05:16:30.262822 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-06 05:16:30.262833 | orchestrator | Monday 06 April 2026 05:16:26 +0000 (0:00:00.147) 0:08:56.190 **********
2026-04-06 05:16:30.262844 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.262855 | orchestrator |
2026-04-06 05:16:30.262886 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-06 05:16:30.262898 | orchestrator | Monday 06 April 2026 05:16:26 +0000 (0:00:00.141) 0:08:56.332 **********
2026-04-06 05:16:30.262909 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.262920 | orchestrator |
2026-04-06 05:16:30.262930 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-06 05:16:30.262941 | orchestrator | Monday 06 April 2026 05:16:26 +0000 (0:00:00.135) 0:08:56.467 **********
2026-04-06 05:16:30.262971 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.262983 | orchestrator |
2026-04-06 05:16:30.262993 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-06 05:16:30.263004 | orchestrator | Monday 06 April 2026 05:16:26 +0000 (0:00:00.139) 0:08:56.606 **********
2026-04-06 05:16:30.263015 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.263026 | orchestrator |
2026-04-06 05:16:30.263036 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-06 05:16:30.263048 | orchestrator | Monday 06 April 2026 05:16:27 +0000 (0:00:00.135) 0:08:56.742 **********
2026-04-06 05:16:30.263059 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.263069 | orchestrator |
2026-04-06 05:16:30.263080 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-06 05:16:30.263091 | orchestrator | Monday 06 April 2026 05:16:27 +0000 (0:00:00.128) 0:08:56.871 **********
2026-04-06 05:16:30.263102 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.263113 | orchestrator |
2026-04-06 05:16:30.263124 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-06 05:16:30.263135 | orchestrator | Monday 06 April 2026 05:16:27 +0000 (0:00:00.447) 0:08:57.318 **********
2026-04-06 05:16:30.263145 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.263156 | orchestrator |
2026-04-06 05:16:30.263167 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-06 05:16:30.263178 | orchestrator | Monday 06 April 2026 05:16:27 +0000 (0:00:00.141) 0:08:57.459 **********
2026-04-06 05:16:30.263189 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:16:30.263199 | orchestrator |
2026-04-06 05:16:30.263210 | orchestrator | TASK
[ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-06 05:16:30.263232 | orchestrator | Monday 06 April 2026 05:16:27 +0000 (0:00:00.133) 0:08:57.593 ********** 2026-04-06 05:16:30.263248 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:16:30.263260 | orchestrator | 2026-04-06 05:16:30.263270 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-06 05:16:30.263281 | orchestrator | Monday 06 April 2026 05:16:28 +0000 (0:00:00.146) 0:08:57.740 ********** 2026-04-06 05:16:30.263292 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:16:30.263303 | orchestrator | 2026-04-06 05:16:30.263314 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-06 05:16:30.263325 | orchestrator | Monday 06 April 2026 05:16:28 +0000 (0:00:00.135) 0:08:57.875 ********** 2026-04-06 05:16:30.263335 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:16:30.263346 | orchestrator | 2026-04-06 05:16:30.263357 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-06 05:16:30.263368 | orchestrator | Monday 06 April 2026 05:16:28 +0000 (0:00:00.262) 0:08:58.138 ********** 2026-04-06 05:16:30.263379 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:16:30.263390 | orchestrator | 2026-04-06 05:16:30.263400 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-06 05:16:30.263411 | orchestrator | Monday 06 April 2026 05:16:28 +0000 (0:00:00.147) 0:08:58.285 ********** 2026-04-06 05:16:30.263422 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:16:30.263433 | orchestrator | 2026-04-06 05:16:30.263444 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-06 05:16:30.263454 | orchestrator | Monday 06 April 2026 05:16:28 +0000 (0:00:00.226) 0:08:58.512 ********** 2026-04-06 
05:16:30.263465 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:16:30.263476 | orchestrator | 2026-04-06 05:16:30.263487 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-06 05:16:30.263497 | orchestrator | Monday 06 April 2026 05:16:28 +0000 (0:00:00.142) 0:08:58.654 ********** 2026-04-06 05:16:30.263508 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:16:30.263519 | orchestrator | 2026-04-06 05:16:30.263530 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-06 05:16:30.263542 | orchestrator | Monday 06 April 2026 05:16:29 +0000 (0:00:00.134) 0:08:58.789 ********** 2026-04-06 05:16:30.263553 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:16:30.263564 | orchestrator | 2026-04-06 05:16:30.263575 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-06 05:16:30.263585 | orchestrator | Monday 06 April 2026 05:16:29 +0000 (0:00:00.158) 0:08:58.947 ********** 2026-04-06 05:16:30.263596 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:16:30.263607 | orchestrator | 2026-04-06 05:16:30.263618 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-06 05:16:30.263629 | orchestrator | Monday 06 April 2026 05:16:29 +0000 (0:00:00.147) 0:08:59.095 ********** 2026-04-06 05:16:30.263639 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:16:30.263650 | orchestrator | 2026-04-06 05:16:30.263661 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-06 05:16:30.263672 | orchestrator | Monday 06 April 2026 05:16:29 +0000 (0:00:00.139) 0:08:59.234 ********** 2026-04-06 05:16:30.263683 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:16:30.263694 | orchestrator | 2026-04-06 05:16:30.263704 | orchestrator | TASK 
[ceph-facts : Set_fact _interface] **************************************** 2026-04-06 05:16:30.263715 | orchestrator | Monday 06 April 2026 05:16:29 +0000 (0:00:00.413) 0:08:59.648 ********** 2026-04-06 05:16:30.263726 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-04-06 05:16:30.263737 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-04-06 05:16:30.263755 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-04-06 05:16:38.335560 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:16:38.335670 | orchestrator | 2026-04-06 05:16:38.335685 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-06 05:16:38.335725 | orchestrator | Monday 06 April 2026 05:16:30 +0000 (0:00:00.438) 0:09:00.087 ********** 2026-04-06 05:16:38.335737 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-04-06 05:16:38.335749 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-04-06 05:16:38.335759 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-04-06 05:16:38.335770 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:16:38.335781 | orchestrator | 2026-04-06 05:16:38.335793 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-06 05:16:38.335804 | orchestrator | Monday 06 April 2026 05:16:30 +0000 (0:00:00.449) 0:09:00.537 ********** 2026-04-06 05:16:38.335815 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-04-06 05:16:38.335826 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-04-06 05:16:38.335836 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-04-06 05:16:38.335847 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:16:38.335858 | orchestrator | 2026-04-06 05:16:38.335869 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] 
*************************** 2026-04-06 05:16:38.335880 | orchestrator | Monday 06 April 2026 05:16:31 +0000 (0:00:00.422) 0:09:00.960 ********** 2026-04-06 05:16:38.335891 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:16:38.335996 | orchestrator | 2026-04-06 05:16:38.336009 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-06 05:16:38.336020 | orchestrator | Monday 06 April 2026 05:16:31 +0000 (0:00:00.149) 0:09:01.110 ********** 2026-04-06 05:16:38.336032 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-04-06 05:16:38.336043 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:16:38.336054 | orchestrator | 2026-04-06 05:16:38.336065 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-06 05:16:38.336076 | orchestrator | Monday 06 April 2026 05:16:31 +0000 (0:00:00.335) 0:09:01.445 ********** 2026-04-06 05:16:38.336090 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:16:38.336102 | orchestrator | 2026-04-06 05:16:38.336115 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-06 05:16:38.336128 | orchestrator | Monday 06 April 2026 05:16:31 +0000 (0:00:00.193) 0:09:01.638 ********** 2026-04-06 05:16:38.336155 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-06 05:16:38.336169 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-06 05:16:38.336182 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-06 05:16:38.336194 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:16:38.336206 | orchestrator | 2026-04-06 05:16:38.336219 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-06 05:16:38.336233 | orchestrator | Monday 06 April 2026 05:16:32 +0000 (0:00:00.471) 0:09:02.110 ********** 2026-04-06 05:16:38.336245 | orchestrator | 
skipping: [testbed-node-1] 2026-04-06 05:16:38.336258 | orchestrator | 2026-04-06 05:16:38.336270 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-06 05:16:38.336283 | orchestrator | Monday 06 April 2026 05:16:32 +0000 (0:00:00.138) 0:09:02.249 ********** 2026-04-06 05:16:38.336296 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:16:38.336308 | orchestrator | 2026-04-06 05:16:38.336321 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-06 05:16:38.336334 | orchestrator | Monday 06 April 2026 05:16:32 +0000 (0:00:00.119) 0:09:02.368 ********** 2026-04-06 05:16:38.336346 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:16:38.336359 | orchestrator | 2026-04-06 05:16:38.336372 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-06 05:16:38.336386 | orchestrator | Monday 06 April 2026 05:16:32 +0000 (0:00:00.136) 0:09:02.504 ********** 2026-04-06 05:16:38.336399 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:16:38.336412 | orchestrator | 2026-04-06 05:16:38.336425 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-04-06 05:16:38.336446 | orchestrator | 2026-04-06 05:16:38.336458 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-04-06 05:16:38.336469 | orchestrator | Monday 06 April 2026 05:16:33 +0000 (0:00:00.966) 0:09:03.471 ********** 2026-04-06 05:16:38.336480 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.336491 | orchestrator | 2026-04-06 05:16:38.336502 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-06 05:16:38.336513 | orchestrator | Monday 06 April 2026 05:16:33 +0000 (0:00:00.230) 0:09:03.701 ********** 2026-04-06 05:16:38.336524 | orchestrator | skipping: [testbed-node-2] 2026-04-06 
05:16:38.336535 | orchestrator | 2026-04-06 05:16:38.336546 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-06 05:16:38.336557 | orchestrator | Monday 06 April 2026 05:16:34 +0000 (0:00:00.213) 0:09:03.914 ********** 2026-04-06 05:16:38.336568 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.336579 | orchestrator | 2026-04-06 05:16:38.336590 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-06 05:16:38.336601 | orchestrator | Monday 06 April 2026 05:16:34 +0000 (0:00:00.136) 0:09:04.051 ********** 2026-04-06 05:16:38.336612 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.336623 | orchestrator | 2026-04-06 05:16:38.336635 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-06 05:16:38.336646 | orchestrator | Monday 06 April 2026 05:16:34 +0000 (0:00:00.147) 0:09:04.199 ********** 2026-04-06 05:16:38.336657 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.336668 | orchestrator | 2026-04-06 05:16:38.336679 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-06 05:16:38.336690 | orchestrator | Monday 06 April 2026 05:16:34 +0000 (0:00:00.129) 0:09:04.328 ********** 2026-04-06 05:16:38.336701 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.336712 | orchestrator | 2026-04-06 05:16:38.336723 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-06 05:16:38.336751 | orchestrator | Monday 06 April 2026 05:16:34 +0000 (0:00:00.145) 0:09:04.474 ********** 2026-04-06 05:16:38.336762 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.336774 | orchestrator | 2026-04-06 05:16:38.336785 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-06 05:16:38.336796 | orchestrator | Monday 06 
April 2026 05:16:34 +0000 (0:00:00.138) 0:09:04.613 ********** 2026-04-06 05:16:38.336807 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.336818 | orchestrator | 2026-04-06 05:16:38.336828 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-06 05:16:38.336839 | orchestrator | Monday 06 April 2026 05:16:35 +0000 (0:00:00.141) 0:09:04.754 ********** 2026-04-06 05:16:38.336850 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.336861 | orchestrator | 2026-04-06 05:16:38.336872 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-06 05:16:38.336883 | orchestrator | Monday 06 April 2026 05:16:35 +0000 (0:00:00.132) 0:09:04.886 ********** 2026-04-06 05:16:38.336894 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.336905 | orchestrator | 2026-04-06 05:16:38.336916 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-06 05:16:38.336927 | orchestrator | Monday 06 April 2026 05:16:35 +0000 (0:00:00.130) 0:09:05.017 ********** 2026-04-06 05:16:38.336938 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.336981 | orchestrator | 2026-04-06 05:16:38.336993 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-06 05:16:38.337004 | orchestrator | Monday 06 April 2026 05:16:35 +0000 (0:00:00.414) 0:09:05.431 ********** 2026-04-06 05:16:38.337015 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.337026 | orchestrator | 2026-04-06 05:16:38.337037 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-06 05:16:38.337048 | orchestrator | Monday 06 April 2026 05:16:35 +0000 (0:00:00.211) 0:09:05.643 ********** 2026-04-06 05:16:38.337068 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.337079 | orchestrator | 2026-04-06 05:16:38.337089 | 
orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-06 05:16:38.337100 | orchestrator | Monday 06 April 2026 05:16:36 +0000 (0:00:00.135) 0:09:05.779 ********** 2026-04-06 05:16:38.337111 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.337122 | orchestrator | 2026-04-06 05:16:38.337133 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-06 05:16:38.337143 | orchestrator | Monday 06 April 2026 05:16:36 +0000 (0:00:00.139) 0:09:05.918 ********** 2026-04-06 05:16:38.337160 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.337171 | orchestrator | 2026-04-06 05:16:38.337182 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-06 05:16:38.337193 | orchestrator | Monday 06 April 2026 05:16:36 +0000 (0:00:00.148) 0:09:06.067 ********** 2026-04-06 05:16:38.337204 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.337214 | orchestrator | 2026-04-06 05:16:38.337225 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-06 05:16:38.337236 | orchestrator | Monday 06 April 2026 05:16:36 +0000 (0:00:00.159) 0:09:06.227 ********** 2026-04-06 05:16:38.337247 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.337258 | orchestrator | 2026-04-06 05:16:38.337268 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-06 05:16:38.337279 | orchestrator | Monday 06 April 2026 05:16:36 +0000 (0:00:00.118) 0:09:06.345 ********** 2026-04-06 05:16:38.337290 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.337301 | orchestrator | 2026-04-06 05:16:38.337312 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-06 05:16:38.337322 | orchestrator | Monday 06 April 2026 05:16:36 +0000 (0:00:00.137) 0:09:06.482 ********** 
2026-04-06 05:16:38.337333 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.337344 | orchestrator | 2026-04-06 05:16:38.337355 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-06 05:16:38.337366 | orchestrator | Monday 06 April 2026 05:16:36 +0000 (0:00:00.134) 0:09:06.617 ********** 2026-04-06 05:16:38.337377 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.337388 | orchestrator | 2026-04-06 05:16:38.337399 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-06 05:16:38.337410 | orchestrator | Monday 06 April 2026 05:16:37 +0000 (0:00:00.136) 0:09:06.754 ********** 2026-04-06 05:16:38.337421 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.337432 | orchestrator | 2026-04-06 05:16:38.337442 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-06 05:16:38.337453 | orchestrator | Monday 06 April 2026 05:16:37 +0000 (0:00:00.125) 0:09:06.880 ********** 2026-04-06 05:16:38.337464 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.337475 | orchestrator | 2026-04-06 05:16:38.337485 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-06 05:16:38.337496 | orchestrator | Monday 06 April 2026 05:16:37 +0000 (0:00:00.134) 0:09:07.015 ********** 2026-04-06 05:16:38.337507 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.337518 | orchestrator | 2026-04-06 05:16:38.337529 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-06 05:16:38.337540 | orchestrator | Monday 06 April 2026 05:16:37 +0000 (0:00:00.440) 0:09:07.456 ********** 2026-04-06 05:16:38.337551 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.337562 | orchestrator | 2026-04-06 05:16:38.337572 | orchestrator | TASK [ceph-container-common : Generate 
systemd ceph target file] *************** 2026-04-06 05:16:38.337583 | orchestrator | Monday 06 April 2026 05:16:37 +0000 (0:00:00.249) 0:09:07.705 ********** 2026-04-06 05:16:38.337594 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.337605 | orchestrator | 2026-04-06 05:16:38.337616 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-06 05:16:38.337626 | orchestrator | Monday 06 April 2026 05:16:38 +0000 (0:00:00.142) 0:09:07.847 ********** 2026-04-06 05:16:38.337644 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:38.337655 | orchestrator | 2026-04-06 05:16:38.337665 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-06 05:16:38.337676 | orchestrator | Monday 06 April 2026 05:16:38 +0000 (0:00:00.142) 0:09:07.990 ********** 2026-04-06 05:16:38.337693 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.356355 | orchestrator | 2026-04-06 05:16:46.356497 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-06 05:16:46.356526 | orchestrator | Monday 06 April 2026 05:16:38 +0000 (0:00:00.151) 0:09:08.141 ********** 2026-04-06 05:16:46.356546 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.356568 | orchestrator | 2026-04-06 05:16:46.356588 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-06 05:16:46.356605 | orchestrator | Monday 06 April 2026 05:16:38 +0000 (0:00:00.150) 0:09:08.292 ********** 2026-04-06 05:16:46.356623 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.356641 | orchestrator | 2026-04-06 05:16:46.356658 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-06 05:16:46.356676 | orchestrator | Monday 06 April 2026 05:16:38 +0000 (0:00:00.128) 0:09:08.420 ********** 2026-04-06 05:16:46.356696 | orchestrator | skipping: 
[testbed-node-2] 2026-04-06 05:16:46.356715 | orchestrator | 2026-04-06 05:16:46.356732 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-06 05:16:46.356752 | orchestrator | Monday 06 April 2026 05:16:38 +0000 (0:00:00.142) 0:09:08.563 ********** 2026-04-06 05:16:46.356769 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.356787 | orchestrator | 2026-04-06 05:16:46.356805 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-06 05:16:46.356823 | orchestrator | Monday 06 April 2026 05:16:38 +0000 (0:00:00.132) 0:09:08.696 ********** 2026-04-06 05:16:46.356841 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.356861 | orchestrator | 2026-04-06 05:16:46.356882 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-06 05:16:46.356901 | orchestrator | Monday 06 April 2026 05:16:39 +0000 (0:00:00.212) 0:09:08.908 ********** 2026-04-06 05:16:46.356920 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.356938 | orchestrator | 2026-04-06 05:16:46.357018 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-06 05:16:46.357035 | orchestrator | Monday 06 April 2026 05:16:39 +0000 (0:00:00.145) 0:09:09.053 ********** 2026-04-06 05:16:46.357048 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.357061 | orchestrator | 2026-04-06 05:16:46.357073 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-06 05:16:46.357086 | orchestrator | Monday 06 April 2026 05:16:39 +0000 (0:00:00.143) 0:09:09.197 ********** 2026-04-06 05:16:46.357099 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.357112 | orchestrator | 2026-04-06 05:16:46.357142 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-06 05:16:46.357156 
| orchestrator | Monday 06 April 2026 05:16:39 +0000 (0:00:00.438) 0:09:09.636 ********** 2026-04-06 05:16:46.357170 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.357182 | orchestrator | 2026-04-06 05:16:46.357196 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-06 05:16:46.357209 | orchestrator | Monday 06 April 2026 05:16:40 +0000 (0:00:00.138) 0:09:09.774 ********** 2026-04-06 05:16:46.357219 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.357237 | orchestrator | 2026-04-06 05:16:46.357263 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-06 05:16:46.357285 | orchestrator | Monday 06 April 2026 05:16:40 +0000 (0:00:00.126) 0:09:09.901 ********** 2026-04-06 05:16:46.357302 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.357319 | orchestrator | 2026-04-06 05:16:46.357336 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-06 05:16:46.357353 | orchestrator | Monday 06 April 2026 05:16:40 +0000 (0:00:00.149) 0:09:10.050 ********** 2026-04-06 05:16:46.357406 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.357425 | orchestrator | 2026-04-06 05:16:46.357445 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-06 05:16:46.357459 | orchestrator | Monday 06 April 2026 05:16:40 +0000 (0:00:00.137) 0:09:10.188 ********** 2026-04-06 05:16:46.357470 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.357481 | orchestrator | 2026-04-06 05:16:46.357492 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-06 05:16:46.357503 | orchestrator | Monday 06 April 2026 05:16:40 +0000 (0:00:00.140) 0:09:10.328 ********** 2026-04-06 05:16:46.357514 | orchestrator | skipping: [testbed-node-2] 
2026-04-06 05:16:46.357525 | orchestrator | 2026-04-06 05:16:46.357536 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-06 05:16:46.357547 | orchestrator | Monday 06 April 2026 05:16:40 +0000 (0:00:00.144) 0:09:10.473 ********** 2026-04-06 05:16:46.357558 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.357568 | orchestrator | 2026-04-06 05:16:46.357580 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-06 05:16:46.357590 | orchestrator | Monday 06 April 2026 05:16:40 +0000 (0:00:00.143) 0:09:10.616 ********** 2026-04-06 05:16:46.357601 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.357612 | orchestrator | 2026-04-06 05:16:46.357623 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-06 05:16:46.357634 | orchestrator | Monday 06 April 2026 05:16:41 +0000 (0:00:00.153) 0:09:10.769 ********** 2026-04-06 05:16:46.357644 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.357655 | orchestrator | 2026-04-06 05:16:46.357666 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-06 05:16:46.357677 | orchestrator | Monday 06 April 2026 05:16:41 +0000 (0:00:00.135) 0:09:10.905 ********** 2026-04-06 05:16:46.357687 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.357698 | orchestrator | 2026-04-06 05:16:46.357709 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-06 05:16:46.357720 | orchestrator | Monday 06 April 2026 05:16:41 +0000 (0:00:00.140) 0:09:11.045 ********** 2026-04-06 05:16:46.357731 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.357741 | orchestrator | 2026-04-06 05:16:46.357752 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 
2026-04-06 05:16:46.357763 | orchestrator | Monday 06 April 2026 05:16:41 +0000 (0:00:00.222) 0:09:11.268 ********** 2026-04-06 05:16:46.357796 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.357807 | orchestrator | 2026-04-06 05:16:46.357818 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-06 05:16:46.357829 | orchestrator | Monday 06 April 2026 05:16:41 +0000 (0:00:00.152) 0:09:11.420 ********** 2026-04-06 05:16:46.357840 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.357850 | orchestrator | 2026-04-06 05:16:46.357861 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-06 05:16:46.357872 | orchestrator | Monday 06 April 2026 05:16:42 +0000 (0:00:00.880) 0:09:12.301 ********** 2026-04-06 05:16:46.357882 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.357893 | orchestrator | 2026-04-06 05:16:46.357904 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-06 05:16:46.357914 | orchestrator | Monday 06 April 2026 05:16:42 +0000 (0:00:00.150) 0:09:12.452 ********** 2026-04-06 05:16:46.357925 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.357936 | orchestrator | 2026-04-06 05:16:46.357947 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-06 05:16:46.358002 | orchestrator | Monday 06 April 2026 05:16:42 +0000 (0:00:00.144) 0:09:12.596 ********** 2026-04-06 05:16:46.358076 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.358100 | orchestrator | 2026-04-06 05:16:46.358130 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-06 05:16:46.358152 | orchestrator | Monday 06 April 2026 05:16:43 +0000 (0:00:00.159) 0:09:12.756 ********** 2026-04-06 05:16:46.358163 | orchestrator 
| skipping: [testbed-node-2] 2026-04-06 05:16:46.358174 | orchestrator | 2026-04-06 05:16:46.358184 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-06 05:16:46.358195 | orchestrator | Monday 06 April 2026 05:16:43 +0000 (0:00:00.114) 0:09:12.871 ********** 2026-04-06 05:16:46.358205 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.358216 | orchestrator | 2026-04-06 05:16:46.358227 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-06 05:16:46.358238 | orchestrator | Monday 06 April 2026 05:16:43 +0000 (0:00:00.173) 0:09:13.044 ********** 2026-04-06 05:16:46.358248 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.358259 | orchestrator | 2026-04-06 05:16:46.358269 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-06 05:16:46.358289 | orchestrator | Monday 06 April 2026 05:16:43 +0000 (0:00:00.149) 0:09:13.194 ********** 2026-04-06 05:16:46.358300 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-04-06 05:16:46.358311 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-06 05:16:46.358322 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-06 05:16:46.358332 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.358343 | orchestrator | 2026-04-06 05:16:46.358353 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-06 05:16:46.358364 | orchestrator | Monday 06 April 2026 05:16:43 +0000 (0:00:00.405) 0:09:13.599 ********** 2026-04-06 05:16:46.358375 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-04-06 05:16:46.358385 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-06 05:16:46.358396 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-06 05:16:46.358406 | 
orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.358417 | orchestrator | 2026-04-06 05:16:46.358428 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-06 05:16:46.358443 | orchestrator | Monday 06 April 2026 05:16:44 +0000 (0:00:00.412) 0:09:14.011 ********** 2026-04-06 05:16:46.358462 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-04-06 05:16:46.358479 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-06 05:16:46.358496 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-06 05:16:46.358514 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.358532 | orchestrator | 2026-04-06 05:16:46.358549 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-06 05:16:46.358566 | orchestrator | Monday 06 April 2026 05:16:44 +0000 (0:00:00.437) 0:09:14.449 ********** 2026-04-06 05:16:46.358582 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.358600 | orchestrator | 2026-04-06 05:16:46.358617 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-06 05:16:46.358636 | orchestrator | Monday 06 April 2026 05:16:44 +0000 (0:00:00.136) 0:09:14.585 ********** 2026-04-06 05:16:46.358654 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-04-06 05:16:46.358671 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.358689 | orchestrator | 2026-04-06 05:16:46.358707 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-06 05:16:46.358726 | orchestrator | Monday 06 April 2026 05:16:45 +0000 (0:00:00.333) 0:09:14.919 ********** 2026-04-06 05:16:46.358745 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.358764 | orchestrator | 2026-04-06 05:16:46.358783 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] 
********************************** 2026-04-06 05:16:46.358802 | orchestrator | Monday 06 April 2026 05:16:45 +0000 (0:00:00.546) 0:09:15.466 ********** 2026-04-06 05:16:46.358820 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-06 05:16:46.358846 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-06 05:16:46.358857 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-06 05:16:46.358868 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.358878 | orchestrator | 2026-04-06 05:16:46.358889 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-06 05:16:46.358900 | orchestrator | Monday 06 April 2026 05:16:46 +0000 (0:00:00.410) 0:09:15.876 ********** 2026-04-06 05:16:46.358911 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:16:46.358922 | orchestrator | 2026-04-06 05:16:46.358933 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-06 05:16:46.358943 | orchestrator | Monday 06 April 2026 05:16:46 +0000 (0:00:00.144) 0:09:16.020 ********** 2026-04-06 05:16:46.358995 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:17:08.885024 | orchestrator | 2026-04-06 05:17:08.885175 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-06 05:17:08.885206 | orchestrator | Monday 06 April 2026 05:16:46 +0000 (0:00:00.137) 0:09:16.158 ********** 2026-04-06 05:17:08.885227 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:17:08.885248 | orchestrator | 2026-04-06 05:17:08.885267 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-06 05:17:08.885280 | orchestrator | Monday 06 April 2026 05:16:46 +0000 (0:00:00.135) 0:09:16.294 ********** 2026-04-06 05:17:08.885299 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:17:08.885317 | orchestrator | 2026-04-06 
05:17:08.885335 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-04-06 05:17:08.885353 | orchestrator | 2026-04-06 05:17:08.885371 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-04-06 05:17:08.885390 | orchestrator | Monday 06 April 2026 05:16:47 +0000 (0:00:00.617) 0:09:16.912 ********** 2026-04-06 05:17:08.885409 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:17:08.885427 | orchestrator | 2026-04-06 05:17:08.885447 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-04-06 05:17:08.885465 | orchestrator | Monday 06 April 2026 05:16:59 +0000 (0:00:12.011) 0:09:28.923 ********** 2026-04-06 05:17:08.885485 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:17:08.885506 | orchestrator | 2026-04-06 05:17:08.885525 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-06 05:17:08.885544 | orchestrator | Monday 06 April 2026 05:17:00 +0000 (0:00:01.534) 0:09:30.458 ********** 2026-04-06 05:17:08.885565 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-04-06 05:17:08.885585 | orchestrator | 2026-04-06 05:17:08.885606 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-06 05:17:08.885626 | orchestrator | Monday 06 April 2026 05:17:00 +0000 (0:00:00.235) 0:09:30.694 ********** 2026-04-06 05:17:08.885646 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:17:08.885661 | orchestrator | 2026-04-06 05:17:08.885674 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-06 05:17:08.885687 | orchestrator | Monday 06 April 2026 05:17:01 +0000 (0:00:00.780) 0:09:31.474 ********** 2026-04-06 05:17:08.885700 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:17:08.885712 | orchestrator | 2026-04-06 05:17:08.885726 
| orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-06 05:17:08.885758 | orchestrator | Monday 06 April 2026 05:17:01 +0000 (0:00:00.141) 0:09:31.615 ********** 2026-04-06 05:17:08.885772 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:17:08.885785 | orchestrator | 2026-04-06 05:17:08.885798 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-06 05:17:08.885811 | orchestrator | Monday 06 April 2026 05:17:02 +0000 (0:00:00.472) 0:09:32.088 ********** 2026-04-06 05:17:08.885824 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:17:08.885837 | orchestrator | 2026-04-06 05:17:08.885850 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-06 05:17:08.885861 | orchestrator | Monday 06 April 2026 05:17:02 +0000 (0:00:00.148) 0:09:32.236 ********** 2026-04-06 05:17:08.885896 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:17:08.885907 | orchestrator | 2026-04-06 05:17:08.885918 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-06 05:17:08.885929 | orchestrator | Monday 06 April 2026 05:17:02 +0000 (0:00:00.132) 0:09:32.369 ********** 2026-04-06 05:17:08.885940 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:17:08.885951 | orchestrator | 2026-04-06 05:17:08.886003 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-06 05:17:08.886070 | orchestrator | Monday 06 April 2026 05:17:02 +0000 (0:00:00.175) 0:09:32.545 ********** 2026-04-06 05:17:08.886083 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:08.886094 | orchestrator | 2026-04-06 05:17:08.886105 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-06 05:17:08.886116 | orchestrator | Monday 06 April 2026 05:17:02 +0000 (0:00:00.153) 0:09:32.699 ********** 2026-04-06 
05:17:08.886127 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:17:08.886138 | orchestrator | 2026-04-06 05:17:08.886148 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-06 05:17:08.886159 | orchestrator | Monday 06 April 2026 05:17:03 +0000 (0:00:00.157) 0:09:32.856 ********** 2026-04-06 05:17:08.886170 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-06 05:17:08.886181 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:17:08.886192 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:17:08.886203 | orchestrator | 2026-04-06 05:17:08.886213 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-06 05:17:08.886224 | orchestrator | Monday 06 April 2026 05:17:04 +0000 (0:00:01.028) 0:09:33.885 ********** 2026-04-06 05:17:08.886235 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:17:08.886246 | orchestrator | 2026-04-06 05:17:08.886256 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-06 05:17:08.886267 | orchestrator | Monday 06 April 2026 05:17:04 +0000 (0:00:00.248) 0:09:34.133 ********** 2026-04-06 05:17:08.886278 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-06 05:17:08.886289 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:17:08.886300 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:17:08.886311 | orchestrator | 2026-04-06 05:17:08.886322 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-06 05:17:08.886332 | orchestrator | Monday 06 April 2026 05:17:06 +0000 (0:00:02.103) 0:09:36.236 ********** 2026-04-06 05:17:08.886343 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-0)  2026-04-06 05:17:08.886354 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-06 05:17:08.886365 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-06 05:17:08.886376 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:08.886387 | orchestrator | 2026-04-06 05:17:08.886418 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-06 05:17:08.886430 | orchestrator | Monday 06 April 2026 05:17:07 +0000 (0:00:00.803) 0:09:37.040 ********** 2026-04-06 05:17:08.886444 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-06 05:17:08.886458 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-06 05:17:08.886469 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-06 05:17:08.886490 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:08.886501 | orchestrator | 2026-04-06 05:17:08.886512 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-06 05:17:08.886523 | orchestrator | Monday 06 April 2026 05:17:08 +0000 (0:00:00.980) 0:09:38.021 ********** 2026-04-06 05:17:08.886537 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:17:08.886558 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:17:08.886570 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:17:08.886581 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:08.886593 | orchestrator | 2026-04-06 05:17:08.886603 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-06 05:17:08.886614 | orchestrator | Monday 06 April 2026 05:17:08 +0000 (0:00:00.465) 0:09:38.486 ********** 2026-04-06 05:17:08.886628 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '06ed7bf51830', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-06 05:17:04.906127', 'end': '2026-04-06 05:17:04.945405', 'delta': '0:00:00.039278', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 
'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['06ed7bf51830'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-06 05:17:08.886643 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '6879ce368bbc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-06 05:17:05.781684', 'end': '2026-04-06 05:17:05.831437', 'delta': '0:00:00.049753', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6879ce368bbc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-06 05:17:08.886663 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a00606ebddc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-06 05:17:06.324149', 'end': '2026-04-06 05:17:06.369782', 'delta': '0:00:00.045633', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a00606ebddc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-06 05:17:12.583240 | orchestrator | 2026-04-06 05:17:12.583336 | orchestrator | TASK [ceph-facts : 
Set_fact _container_exec_cmd] ******************************* 2026-04-06 05:17:12.583351 | orchestrator | Monday 06 April 2026 05:17:08 +0000 (0:00:00.213) 0:09:38.700 ********** 2026-04-06 05:17:12.583362 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:17:12.583373 | orchestrator | 2026-04-06 05:17:12.583383 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-06 05:17:12.583393 | orchestrator | Monday 06 April 2026 05:17:09 +0000 (0:00:00.265) 0:09:38.965 ********** 2026-04-06 05:17:12.583402 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:12.583413 | orchestrator | 2026-04-06 05:17:12.583423 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-06 05:17:12.583433 | orchestrator | Monday 06 April 2026 05:17:09 +0000 (0:00:00.264) 0:09:39.230 ********** 2026-04-06 05:17:12.583442 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:17:12.583452 | orchestrator | 2026-04-06 05:17:12.583461 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-06 05:17:12.583471 | orchestrator | Monday 06 April 2026 05:17:09 +0000 (0:00:00.155) 0:09:39.385 ********** 2026-04-06 05:17:12.583480 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:17:12.583490 | orchestrator | 2026-04-06 05:17:12.583499 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-06 05:17:12.583524 | orchestrator | Monday 06 April 2026 05:17:10 +0000 (0:00:01.003) 0:09:40.389 ********** 2026-04-06 05:17:12.583534 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:17:12.583544 | orchestrator | 2026-04-06 05:17:12.583553 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-06 05:17:12.583563 | orchestrator | Monday 06 April 2026 05:17:10 +0000 (0:00:00.167) 0:09:40.556 ********** 2026-04-06 05:17:12.583573 | orchestrator | skipping: 
[testbed-node-0] 2026-04-06 05:17:12.583582 | orchestrator | 2026-04-06 05:17:12.583591 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-06 05:17:12.583601 | orchestrator | Monday 06 April 2026 05:17:10 +0000 (0:00:00.119) 0:09:40.676 ********** 2026-04-06 05:17:12.583610 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:12.583621 | orchestrator | 2026-04-06 05:17:12.583638 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-06 05:17:12.583654 | orchestrator | Monday 06 April 2026 05:17:11 +0000 (0:00:00.214) 0:09:40.890 ********** 2026-04-06 05:17:12.583671 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:12.583688 | orchestrator | 2026-04-06 05:17:12.583703 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-06 05:17:12.583719 | orchestrator | Monday 06 April 2026 05:17:11 +0000 (0:00:00.133) 0:09:41.024 ********** 2026-04-06 05:17:12.583734 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:12.583751 | orchestrator | 2026-04-06 05:17:12.583767 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-06 05:17:12.583784 | orchestrator | Monday 06 April 2026 05:17:11 +0000 (0:00:00.134) 0:09:41.159 ********** 2026-04-06 05:17:12.583802 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:12.583820 | orchestrator | 2026-04-06 05:17:12.583837 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-06 05:17:12.583854 | orchestrator | Monday 06 April 2026 05:17:11 +0000 (0:00:00.147) 0:09:41.306 ********** 2026-04-06 05:17:12.583871 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:12.583888 | orchestrator | 2026-04-06 05:17:12.583907 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-06 05:17:12.583924 | 
orchestrator | Monday 06 April 2026 05:17:11 +0000 (0:00:00.124) 0:09:41.431 ********** 2026-04-06 05:17:12.583942 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:12.584009 | orchestrator | 2026-04-06 05:17:12.584023 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-06 05:17:12.584035 | orchestrator | Monday 06 April 2026 05:17:11 +0000 (0:00:00.153) 0:09:41.584 ********** 2026-04-06 05:17:12.584046 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:12.584057 | orchestrator | 2026-04-06 05:17:12.584068 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-06 05:17:12.584080 | orchestrator | Monday 06 April 2026 05:17:12 +0000 (0:00:00.454) 0:09:42.038 ********** 2026-04-06 05:17:12.584091 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:12.584103 | orchestrator | 2026-04-06 05:17:12.584113 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-06 05:17:12.584125 | orchestrator | Monday 06 April 2026 05:17:12 +0000 (0:00:00.135) 0:09:42.173 ********** 2026-04-06 05:17:12.584138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:17:12.584152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': 
'0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:17:12.584181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:17:12.584193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-06 05:17:12.584212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:17:12.584223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': 
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:17:12.584232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:17:12.584262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '23f8d4f9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part16', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part14', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part15', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 
'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part1', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-06 05:17:12.818150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:17:12.818252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:17:12.818286 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:12.818300 | orchestrator | 2026-04-06 05:17:12.818312 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-06 05:17:12.818324 | orchestrator | Monday 06 April 2026 05:17:12 +0000 (0:00:00.234) 0:09:42.408 ********** 2026-04-06 05:17:12.818338 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:17:12.818373 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:17:12.818386 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:17:12.818399 | orchestrator | skipping: 
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:17:12.818430 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:17:12.818443 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:17:12.818461 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:17:12.818483 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '23f8d4f9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part16', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part14', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part15', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part1', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:17:12.818506 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:17:24.029701 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:17:24.029813 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:24.029831 | orchestrator |
2026-04-06 05:17:24.029845 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-06 05:17:24.029858 | orchestrator | Monday 06 April 2026 05:17:12 +0000 (0:00:00.263) 0:09:42.672 **********
2026-04-06 05:17:24.029870 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:17:24.029906 | orchestrator |
2026-04-06 05:17:24.029918 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-06 05:17:24.029928 | orchestrator | Monday 06 April 2026 05:17:13 +0000 (0:00:00.542) 0:09:43.215 **********
2026-04-06 05:17:24.029939 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:17:24.029950 | orchestrator |
2026-04-06 05:17:24.030009 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-06 05:17:24.030073 | orchestrator | Monday 06 April 2026 05:17:13 +0000 (0:00:00.130) 0:09:43.346 **********
2026-04-06 05:17:24.030084 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:17:24.030095 | orchestrator |
2026-04-06 05:17:24.030106 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-06 05:17:24.030117 | orchestrator | Monday 06 April 2026 05:17:14 +0000 (0:00:00.481) 0:09:43.827 **********
2026-04-06 05:17:24.030128 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:24.030139 | orchestrator |
2026-04-06 05:17:24.030150 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-06 05:17:24.030160 | orchestrator | Monday 06 April 2026 05:17:14 +0000 (0:00:00.148) 0:09:43.975 **********
2026-04-06 05:17:24.030171 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:24.030182 | orchestrator |
2026-04-06 05:17:24.030193 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-06 05:17:24.030204 | orchestrator | Monday 06 April 2026 05:17:14 +0000 (0:00:00.254) 0:09:44.229 **********
2026-04-06 05:17:24.030214 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:24.030225 | orchestrator |
2026-04-06 05:17:24.030239 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-06 05:17:24.030252 | orchestrator | Monday 06 April 2026 05:17:14 +0000 (0:00:00.151) 0:09:44.381 **********
2026-04-06 05:17:24.030265 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 05:17:24.030278 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-06 05:17:24.030291 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-06 05:17:24.030304 | orchestrator |
2026-04-06 05:17:24.030317 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-06 05:17:24.030331 | orchestrator | Monday 06 April 2026 05:17:15 +0000 (0:00:01.053) 0:09:45.434 **********
2026-04-06 05:17:24.030344 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 05:17:24.030356 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-06 05:17:24.030369 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-06 05:17:24.030382 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:24.030394 | orchestrator |
2026-04-06 05:17:24.030407 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-06 05:17:24.030421 | orchestrator | Monday 06 April 2026 05:17:15 +0000 (0:00:00.168) 0:09:45.602 **********
2026-04-06 05:17:24.030433 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:24.030446 | orchestrator |
2026-04-06 05:17:24.030459 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-06 05:17:24.030472 | orchestrator | Monday 06 April 2026 05:17:16 +0000 (0:00:00.157) 0:09:45.759 **********
2026-04-06 05:17:24.030485 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 05:17:24.030498 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 05:17:24.030511 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 05:17:24.030523 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-06 05:17:24.030536 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-06 05:17:24.030549 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-06 05:17:24.030562 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-06 05:17:24.030576 | orchestrator |
2026-04-06 05:17:24.030589 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-06 05:17:24.030612 | orchestrator | Monday 06 April 2026 05:17:17 +0000 (0:00:01.468) 0:09:47.228 **********
2026-04-06 05:17:24.030623 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 05:17:24.030634 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 05:17:24.030645 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 05:17:24.030656 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-06 05:17:24.030683 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-06 05:17:24.030695 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-06 05:17:24.030706 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-06 05:17:24.030717 | orchestrator |
2026-04-06 05:17:24.030728 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-06 05:17:24.030738 | orchestrator | Monday 06 April 2026 05:17:19 +0000 (0:00:00.204) 0:09:48.922 **********
2026-04-06 05:17:24.030749 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-04-06 05:17:24.030761 | orchestrator |
2026-04-06 05:17:24.030772 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-06 05:17:24.030791 | orchestrator | Monday 06 April 2026 05:17:19 +0000 (0:00:00.209) 0:09:49.127 **********
2026-04-06 05:17:24.030802 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-04-06 05:17:24.030813 | orchestrator |
2026-04-06 05:17:24.030824 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-06 05:17:24.030835 | orchestrator | Monday 06 April 2026 05:17:19 +0000 (0:00:00.600) 0:09:49.336 **********
2026-04-06 05:17:24.030846 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:17:24.030857 | orchestrator |
2026-04-06 05:17:24.030868 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-06 05:17:24.030879 | orchestrator | Monday 06 April 2026 05:17:20 +0000 (0:00:00.151) 0:09:49.936 **********
2026-04-06 05:17:24.030890 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:24.030901 | orchestrator |
2026-04-06 05:17:24.030912 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-06 05:17:24.030923 | orchestrator | Monday 06 April 2026 05:17:20 +0000 (0:00:00.138) 0:09:50.088 **********
2026-04-06 05:17:24.030933 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:24.030944 | orchestrator |
2026-04-06 05:17:24.030955 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-06 05:17:24.030999 | orchestrator | Monday 06 April 2026 05:17:20 +0000 (0:00:00.139) 0:09:50.227 **********
2026-04-06 05:17:24.031010 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:24.031021 | orchestrator |
2026-04-06 05:17:24.031032 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-06 05:17:24.031043 | orchestrator | Monday 06 April 2026 05:17:20 +0000 (0:00:00.139) 0:09:50.367 **********
2026-04-06 05:17:24.031054 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:17:24.031065 | orchestrator |
2026-04-06 05:17:24.031076 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-06 05:17:24.031086 | orchestrator | Monday 06 April 2026 05:17:21 +0000 (0:00:00.603) 0:09:50.970 **********
2026-04-06 05:17:24.031097 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:24.031108 | orchestrator |
2026-04-06 05:17:24.031119 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-06 05:17:24.031129 | orchestrator | Monday 06 April 2026 05:17:21 +0000 (0:00:00.135) 0:09:51.106 **********
2026-04-06 05:17:24.031140 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:24.031151 | orchestrator |
2026-04-06 05:17:24.031162 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-06 05:17:24.031184 | orchestrator | Monday 06 April 2026 05:17:21 +0000 (0:00:00.434) 0:09:51.540 **********
2026-04-06 05:17:24.031195 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:17:24.031205 | orchestrator |
2026-04-06 05:17:24.031216 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-06 05:17:24.031227 | orchestrator | Monday 06 April 2026 05:17:22 +0000 (0:00:00.553) 0:09:52.093 **********
2026-04-06 05:17:24.031238 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:17:24.031249 | orchestrator |
2026-04-06 05:17:24.031259 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-06 05:17:24.031270 | orchestrator | Monday 06 April 2026 05:17:22 +0000 (0:00:00.564) 0:09:52.657 **********
2026-04-06 05:17:24.031281 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:24.031292 | orchestrator |
2026-04-06 05:17:24.031303 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-06 05:17:24.031313 | orchestrator | Monday 06 April 2026 05:17:23 +0000 (0:00:00.182) 0:09:52.840 **********
2026-04-06 05:17:24.031324 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:17:24.031335 | orchestrator |
2026-04-06 05:17:24.031345 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-06 05:17:24.031356 | orchestrator | Monday 06 April 2026 05:17:23 +0000 (0:00:00.167) 0:09:53.008 **********
2026-04-06 05:17:24.031367 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:24.031378 | orchestrator |
2026-04-06 05:17:24.031388 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-06 05:17:24.031399 | orchestrator | Monday 06 April 2026 05:17:23 +0000 (0:00:00.134) 0:09:53.143 **********
2026-04-06 05:17:24.031410 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:24.031421 | orchestrator |
2026-04-06 05:17:24.031432 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-06 05:17:24.031443 | orchestrator | Monday 06 April 2026 05:17:23 +0000 (0:00:00.135) 0:09:53.278 **********
2026-04-06 05:17:24.031454 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:24.031464 | orchestrator |
2026-04-06 05:17:24.031475 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-06 05:17:24.031486 | orchestrator | Monday 06 April 2026 05:17:23 +0000 (0:00:00.133) 0:09:53.412 **********
2026-04-06 05:17:24.031497 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:24.031508 | orchestrator |
2026-04-06 05:17:24.031518 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-06 05:17:24.031529 | orchestrator | Monday 06 April 2026 05:17:23 +0000 (0:00:00.140) 0:09:53.552 **********
2026-04-06 05:17:24.031540 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:24.031551 | orchestrator |
2026-04-06 05:17:24.031561 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-06 05:17:24.031572 | orchestrator | Monday 06 April 2026 05:17:23 +0000 (0:00:00.120) 0:09:53.672 **********
2026-04-06 05:17:24.031589 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:17:36.528734 | orchestrator |
2026-04-06 05:17:36.528839 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-06 05:17:36.528855 | orchestrator | Monday 06 April 2026 05:17:24 +0000 (0:00:00.155) 0:09:53.828 **********
2026-04-06 05:17:36.528865 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:17:36.528877 | orchestrator |
2026-04-06 05:17:36.528887 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-06 05:17:36.528897 | orchestrator | Monday 06 April 2026 05:17:24 +0000 (0:00:00.153) 0:09:53.981 **********
2026-04-06 05:17:36.528907 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:17:36.528917 | orchestrator |
2026-04-06 05:17:36.528927 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-06 05:17:36.528937 | orchestrator | Monday 06 April 2026 05:17:24 +0000 (0:00:00.529) 0:09:54.510 **********
2026-04-06 05:17:36.528961 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.529022 | orchestrator |
2026-04-06 05:17:36.529033 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-06 05:17:36.529064 | orchestrator | Monday 06 April 2026 05:17:24 +0000 (0:00:00.146) 0:09:54.657 **********
2026-04-06 05:17:36.529074 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.529084 | orchestrator |
2026-04-06 05:17:36.529093 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-06 05:17:36.529103 | orchestrator | Monday 06 April 2026 05:17:25 +0000 (0:00:00.132) 0:09:54.790 **********
2026-04-06 05:17:36.529112 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.529122 | orchestrator |
2026-04-06 05:17:36.529132 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-06 05:17:36.529141 | orchestrator | Monday 06 April 2026 05:17:25 +0000 (0:00:00.158) 0:09:54.948 **********
2026-04-06 05:17:36.529151 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.529160 | orchestrator |
2026-04-06 05:17:36.529170 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-06 05:17:36.529179 | orchestrator | Monday 06 April 2026 05:17:25 +0000 (0:00:00.139) 0:09:55.088 **********
2026-04-06 05:17:36.529189 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.529198 | orchestrator |
2026-04-06 05:17:36.529208 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-06 05:17:36.529218 | orchestrator | Monday 06 April 2026 05:17:25 +0000 (0:00:00.141) 0:09:55.230 **********
2026-04-06 05:17:36.529227 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.529237 | orchestrator |
2026-04-06 05:17:36.529247 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-06 05:17:36.529257 | orchestrator | Monday 06 April 2026 05:17:25 +0000 (0:00:00.149) 0:09:55.380 **********
2026-04-06 05:17:36.529266 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.529277 | orchestrator |
2026-04-06 05:17:36.529288 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-06 05:17:36.529301 | orchestrator | Monday 06 April 2026 05:17:25 +0000 (0:00:00.140) 0:09:55.520 **********
2026-04-06 05:17:36.529312 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.529323 | orchestrator |
2026-04-06 05:17:36.529335 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-06 05:17:36.529345 | orchestrator | Monday 06 April 2026 05:17:25 +0000 (0:00:00.132) 0:09:55.653 **********
2026-04-06 05:17:36.529356 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.529368 | orchestrator |
2026-04-06 05:17:36.529379 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-06 05:17:36.529390 | orchestrator | Monday 06 April 2026 05:17:26 +0000 (0:00:00.143) 0:09:55.796 **********
2026-04-06 05:17:36.529402 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.529412 | orchestrator |
2026-04-06 05:17:36.529423 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-06 05:17:36.529434 | orchestrator | Monday 06 April 2026 05:17:26 +0000 (0:00:00.119) 0:09:55.916 **********
2026-04-06 05:17:36.529446 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.529457 | orchestrator |
2026-04-06 05:17:36.529468 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-06 05:17:36.529479 | orchestrator | Monday 06 April 2026 05:17:26 +0000 (0:00:00.131) 0:09:56.048 **********
2026-04-06 05:17:36.529490 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.529501 | orchestrator |
2026-04-06 05:17:36.529512 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-06 05:17:36.529523 | orchestrator | Monday 06 April 2026 05:17:26 +0000 (0:00:00.523) 0:09:56.572 **********
2026-04-06 05:17:36.529535 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:17:36.529546 | orchestrator |
2026-04-06 05:17:36.529557 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-06 05:17:36.529568 | orchestrator | Monday 06 April 2026 05:17:27 +0000 (0:00:00.942) 0:09:57.514 **********
2026-04-06 05:17:36.529579 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:17:36.529590 | orchestrator |
2026-04-06 05:17:36.529602 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-06 05:17:36.529620 | orchestrator | Monday 06 April 2026 05:17:29 +0000 (0:00:01.535) 0:09:59.050 **********
2026-04-06 05:17:36.529632 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-04-06 05:17:36.529643 | orchestrator |
2026-04-06 05:17:36.529652 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-06 05:17:36.529662 | orchestrator | Monday 06 April 2026 05:17:29 +0000 (0:00:00.225) 0:09:59.276 **********
2026-04-06 05:17:36.529671 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.529681 | orchestrator |
2026-04-06 05:17:36.529691 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-06 05:17:36.529700 | orchestrator | Monday 06 April 2026 05:17:29 +0000 (0:00:00.135) 0:09:59.411 **********
2026-04-06 05:17:36.529710 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.529719 | orchestrator |
2026-04-06 05:17:36.529729 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-06 05:17:36.529739 | orchestrator | Monday 06 April 2026 05:17:29 +0000 (0:00:00.129) 0:09:59.541 **********
2026-04-06 05:17:36.529764 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-06 05:17:36.529774 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-06 05:17:36.529784 | orchestrator |
2026-04-06 05:17:36.529794 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-06 05:17:36.529803 | orchestrator | Monday 06 April 2026 05:17:30 +0000 (0:00:00.875) 0:10:00.417 **********
2026-04-06 05:17:36.529813 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:17:36.529823 | orchestrator |
2026-04-06 05:17:36.529832 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-06 05:17:36.529842 | orchestrator | Monday 06 April 2026 05:17:31 +0000 (0:00:00.482) 0:10:00.899 **********
2026-04-06 05:17:36.529851 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.529861 | orchestrator |
2026-04-06 05:17:36.529876 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-06 05:17:36.529886 | orchestrator | Monday 06 April 2026 05:17:31 +0000 (0:00:00.143) 0:10:01.042 **********
2026-04-06 05:17:36.529896 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.529905 | orchestrator |
2026-04-06 05:17:36.529915 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-06 05:17:36.529924 | orchestrator | Monday 06 April 2026 05:17:31 +0000 (0:00:00.141) 0:10:01.183 **********
2026-04-06 05:17:36.529934 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.529944 | orchestrator |
2026-04-06 05:17:36.529953 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-06 05:17:36.530008 | orchestrator | Monday 06 April 2026 05:17:31 +0000 (0:00:00.118) 0:10:01.302 **********
2026-04-06 05:17:36.530109 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-04-06 05:17:36.530120 | orchestrator |
2026-04-06 05:17:36.530130 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-06 05:17:36.530139 | orchestrator | Monday 06 April 2026 05:17:31 +0000 (0:00:00.228) 0:10:01.530 **********
2026-04-06 05:17:36.530149 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:17:36.530159 | orchestrator |
2026-04-06 05:17:36.530168 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-06 05:17:36.530178 | orchestrator | Monday 06 April 2026 05:17:32 +0000 (0:00:01.018) 0:10:02.549 **********
2026-04-06 05:17:36.530187 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-06 05:17:36.530197 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-06 05:17:36.530206 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-06 05:17:36.530216 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.530226 | orchestrator |
2026-04-06 05:17:36.530235 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-06 05:17:36.530253 | orchestrator | Monday 06 April 2026 05:17:33 +0000 (0:00:00.178) 0:10:02.727 **********
2026-04-06 05:17:36.530263 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.530272 | orchestrator |
2026-04-06 05:17:36.530282 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-06 05:17:36.530292 | orchestrator | Monday 06 April 2026 05:17:33 +0000 (0:00:00.167) 0:10:02.895 **********
2026-04-06 05:17:36.530302 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.530311 | orchestrator |
2026-04-06 05:17:36.530321 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-06 05:17:36.530330 | orchestrator | Monday 06 April 2026 05:17:33 +0000 (0:00:00.168) 0:10:03.063 **********
2026-04-06 05:17:36.530340 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.530349 | orchestrator |
2026-04-06 05:17:36.530359 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-06 05:17:36.530369 | orchestrator | Monday 06 April 2026 05:17:33 +0000 (0:00:00.149) 0:10:03.213 **********
2026-04-06 05:17:36.530378 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.530388 | orchestrator |
2026-04-06 05:17:36.530397 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-06 05:17:36.530407 | orchestrator | Monday 06 April 2026 05:17:33 +0000 (0:00:00.151) 0:10:03.365 **********
2026-04-06 05:17:36.530416 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.530426 | orchestrator |
2026-04-06 05:17:36.530435 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-06 05:17:36.530445 | orchestrator | Monday 06 April 2026 05:17:33 +0000 (0:00:00.178) 0:10:03.544 **********
2026-04-06 05:17:36.530455 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:17:36.530464 | orchestrator |
2026-04-06 05:17:36.530474 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-06 05:17:36.530483 | orchestrator | Monday 06 April 2026 05:17:35 +0000 (0:00:01.577) 0:10:05.121 **********
2026-04-06 05:17:36.530493 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:17:36.530502 | orchestrator |
2026-04-06 05:17:36.530512 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-06 05:17:36.530521 | orchestrator | Monday 06 April 2026 05:17:35 +0000 (0:00:00.151) 0:10:05.273 **********
2026-04-06 05:17:36.530531 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-04-06 05:17:36.530541 | orchestrator |
2026-04-06 05:17:36.530550 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-06 05:17:36.530560 | orchestrator | Monday 06 April 2026 05:17:35 +0000 (0:00:00.234) 0:10:05.508 **********
2026-04-06 05:17:36.530569 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.530579 | orchestrator |
2026-04-06 05:17:36.530589 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-06 05:17:36.530598 | orchestrator | Monday 06 April 2026 05:17:35 +0000 (0:00:00.145) 0:10:05.653 **********
2026-04-06 05:17:36.530608 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.530617 | orchestrator |
2026-04-06 05:17:36.530627 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-06 05:17:36.530636 | orchestrator | Monday 06 April 2026 05:17:36 +0000 (0:00:00.152) 0:10:05.806 **********
2026-04-06 05:17:36.530646 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:36.530656 | orchestrator |
2026-04-06 05:17:36.530673 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-06 05:17:48.438962 | orchestrator | Monday 06 April 2026 05:17:36 +0000 (0:00:00.433) 0:10:06.240 **********
2026-04-06 05:17:48.439142 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:48.439162 | orchestrator |
2026-04-06 05:17:48.439179 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-06 05:17:48.439199 | orchestrator | Monday 06 April 2026 05:17:36 +0000 (0:00:00.158) 0:10:06.398 **********
2026-04-06 05:17:48.439217 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:48.439238 | orchestrator |
2026-04-06 05:17:48.439277 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-06 05:17:48.439289 | orchestrator | Monday 06 April 2026 05:17:36 +0000 (0:00:00.153) 0:10:06.553 **********
2026-04-06 05:17:48.439316 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:48.439329 | orchestrator |
2026-04-06 05:17:48.439349 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-06 05:17:48.439366 | orchestrator | Monday 06 April 2026 05:17:36 +0000 (0:00:00.163) 0:10:06.716 **********
2026-04-06 05:17:48.439386 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:48.439405 | orchestrator |
2026-04-06 05:17:48.439417 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-06 05:17:48.439428 | orchestrator | Monday 06 April 2026 05:17:37 +0000 (0:00:00.137) 0:10:06.853 **********
2026-04-06 05:17:48.439439 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:48.439450 | orchestrator |
2026-04-06 05:17:48.439461 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-06 05:17:48.439472 | orchestrator | Monday 06 April 2026 05:17:37 +0000 (0:00:00.152) 0:10:07.006 **********
2026-04-06 05:17:48.439484 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:17:48.439504 | orchestrator |
2026-04-06 05:17:48.439521 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-06 05:17:48.439541 | orchestrator | Monday 06 April 2026 05:17:37 +0000 (0:00:00.231) 0:10:07.237 **********
2026-04-06 05:17:48.439561 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-04-06 05:17:48.439584 | orchestrator |
2026-04-06 05:17:48.439602 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-06 05:17:48.439621 | orchestrator | Monday 06 April 2026 05:17:37 +0000 (0:00:00.202) 0:10:07.440 **********
2026-04-06 05:17:48.439640 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-04-06 05:17:48.439660 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-04-06 05:17:48.439679 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-04-06 05:17:48.439699 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-04-06 05:17:48.439717 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-04-06 05:17:48.439736 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-04-06 05:17:48.439754 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-04-06 05:17:48.439773 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-06 05:17:48.439791 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-06 05:17:48.439811 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-06 05:17:48.439828 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-06 05:17:48.439843 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-06 05:17:48.439862 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-06 05:17:48.439879 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-06 05:17:48.439899 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-04-06 05:17:48.439916 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-04-06 05:17:48.439934 | orchestrator |
2026-04-06 05:17:48.439952 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-06 05:17:48.439997 | orchestrator | Monday 06 April 2026 05:17:43 +0000 (0:00:05.745) 0:10:13.185 **********
2026-04-06 05:17:48.440017 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:48.440036 | orchestrator |
2026-04-06 05:17:48.440055 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-06 05:17:48.440073 | orchestrator | Monday 06 April 2026 05:17:43 +0000 (0:00:00.125) 0:10:13.311 **********
2026-04-06 05:17:48.440092 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:48.440110 | orchestrator |
2026-04-06 05:17:48.440127 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-06 05:17:48.440159 | orchestrator | Monday 06 April 2026 05:17:43 +0000 (0:00:00.142) 0:10:13.454 **********
2026-04-06 05:17:48.440178 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:48.440196 | orchestrator |
2026-04-06 05:17:48.440213 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-06 05:17:48.440232 | orchestrator | Monday 06 April 2026 05:17:44 +0000 (0:00:00.411) 0:10:13.865 **********
2026-04-06 05:17:48.440250 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:48.440268 | orchestrator |
2026-04-06 05:17:48.440286 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-06 05:17:48.440305 | orchestrator | Monday 06 April 2026 05:17:44 +0000 (0:00:00.144) 0:10:14.010 **********
2026-04-06 05:17:48.440324 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:48.440336 | orchestrator |
2026-04-06 05:17:48.440347 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-06 05:17:48.440358 | orchestrator | Monday 06 April 2026 05:17:44 +0000 (0:00:00.145) 0:10:14.155 **********
2026-04-06 05:17:48.440368 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:48.440379 | orchestrator |
2026-04-06 05:17:48.440390 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-06 05:17:48.440401 | orchestrator | Monday 06 April 2026 05:17:44 +0000 (0:00:00.133) 0:10:14.289 **********
2026-04-06 05:17:48.440411 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:48.440422 | orchestrator |
2026-04-06 05:17:48.440451 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-06 05:17:48.440462 | orchestrator | Monday 06 April 2026 05:17:44 +0000 (0:00:00.150) 0:10:14.439 **********
2026-04-06 05:17:48.440473 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:48.440484 | orchestrator |
2026-04-06 05:17:48.440495 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-06 05:17:48.440505 | orchestrator | Monday 06 April 2026 05:17:44 +0000 (0:00:00.142) 0:10:14.582 **********
2026-04-06 05:17:48.440516 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:48.440527 | orchestrator |
2026-04-06 05:17:48.440537 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-06 05:17:48.440555 | orchestrator | Monday 06 April 2026 05:17:45 +0000 (0:00:00.138) 0:10:14.720 **********
2026-04-06 05:17:48.440566 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:48.440576 | orchestrator |
2026-04-06 05:17:48.440587 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-06 05:17:48.440598 | orchestrator | Monday 06 April 2026 05:17:45 +0000 (0:00:00.154) 0:10:14.875 **********
2026-04-06 05:17:48.440608 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:48.440619 | orchestrator |
2026-04-06 05:17:48.440630 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-06 05:17:48.440641 | orchestrator | Monday 06 April 2026 05:17:45 +0000 (0:00:00.134) 0:10:15.010 **********
2026-04-06 05:17:48.440651 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:48.440662 | orchestrator |
2026-04-06 05:17:48.440673 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-06 05:17:48.440683 | orchestrator | Monday 06 April 2026 05:17:45 +0000 (0:00:00.127) 0:10:15.137 **********
2026-04-06 05:17:48.440694 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:17:48.440705 | orchestrator |
2026-04-06 05:17:48.440716 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-06 05:17:48.440726 | orchestrator | Monday 06 April 2026 05:17:45 +0000 (0:00:00.241) 0:10:15.378 **********
2026-04-06 05:17:48.440737 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:48.440748 | orchestrator | 2026-04-06 05:17:48.440758 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-06 05:17:48.440769 | orchestrator | Monday 06 April 2026 05:17:45 +0000 (0:00:00.134) 0:10:15.513 ********** 2026-04-06 05:17:48.440780 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:48.440798 | orchestrator | 2026-04-06 05:17:48.440809 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-06 05:17:48.440819 | orchestrator | Monday 06 April 2026 05:17:46 +0000 (0:00:00.227) 0:10:15.741 ********** 2026-04-06 05:17:48.440830 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:48.440841 | orchestrator | 2026-04-06 05:17:48.440852 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-06 05:17:48.440862 | orchestrator | Monday 06 April 2026 05:17:46 +0000 (0:00:00.437) 0:10:16.179 ********** 2026-04-06 05:17:48.440873 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:48.440884 | orchestrator | 2026-04-06 05:17:48.440895 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-06 05:17:48.440907 | orchestrator | Monday 06 April 2026 05:17:46 +0000 (0:00:00.127) 0:10:16.306 ********** 2026-04-06 05:17:48.440918 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:48.440928 | orchestrator | 2026-04-06 05:17:48.440939 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-06 05:17:48.440950 | orchestrator | Monday 06 April 2026 05:17:46 +0000 (0:00:00.138) 0:10:16.445 ********** 2026-04-06 05:17:48.440960 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:48.440997 | orchestrator | 2026-04-06 05:17:48.441009 | orchestrator | 
TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-06 05:17:48.441020 | orchestrator | Monday 06 April 2026 05:17:46 +0000 (0:00:00.141) 0:10:16.587 ********** 2026-04-06 05:17:48.441031 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:48.441041 | orchestrator | 2026-04-06 05:17:48.441052 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-06 05:17:48.441063 | orchestrator | Monday 06 April 2026 05:17:47 +0000 (0:00:00.135) 0:10:16.722 ********** 2026-04-06 05:17:48.441074 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:48.441085 | orchestrator | 2026-04-06 05:17:48.441095 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-06 05:17:48.441106 | orchestrator | Monday 06 April 2026 05:17:47 +0000 (0:00:00.155) 0:10:16.878 ********** 2026-04-06 05:17:48.441117 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-06 05:17:48.441127 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-06 05:17:48.441138 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-06 05:17:48.441149 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:48.441159 | orchestrator | 2026-04-06 05:17:48.441170 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-06 05:17:48.441181 | orchestrator | Monday 06 April 2026 05:17:47 +0000 (0:00:00.429) 0:10:17.307 ********** 2026-04-06 05:17:48.441191 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-06 05:17:48.441202 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-06 05:17:48.441213 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-06 05:17:48.441224 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:48.441235 | orchestrator | 2026-04-06 05:17:48.441245 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-06 05:17:48.441256 | orchestrator | Monday 06 April 2026 05:17:47 +0000 (0:00:00.401) 0:10:17.709 ********** 2026-04-06 05:17:48.441267 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-06 05:17:48.441278 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-06 05:17:48.441288 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-06 05:17:48.441300 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:17:48.441319 | orchestrator | 2026-04-06 05:17:48.441347 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-06 05:18:28.792581 | orchestrator | Monday 06 April 2026 05:17:48 +0000 (0:00:00.438) 0:10:18.147 ********** 2026-04-06 05:18:28.792693 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:18:28.792710 | orchestrator | 2026-04-06 05:18:28.792747 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-06 05:18:28.792760 | orchestrator | Monday 06 April 2026 05:17:48 +0000 (0:00:00.142) 0:10:18.289 ********** 2026-04-06 05:18:28.792771 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-04-06 05:18:28.792783 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:18:28.792794 | orchestrator | 2026-04-06 05:18:28.792805 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-06 05:18:28.792832 | orchestrator | Monday 06 April 2026 05:17:48 +0000 (0:00:00.310) 0:10:18.600 ********** 2026-04-06 05:18:28.792844 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:18:28.792855 | orchestrator | 2026-04-06 05:18:28.792866 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-06 05:18:28.792877 | orchestrator | Monday 06 April 2026 05:17:49 +0000 (0:00:00.836) 0:10:19.436 
********** 2026-04-06 05:18:28.792888 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-06 05:18:28.792899 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:18:28.792910 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:18:28.792921 | orchestrator | 2026-04-06 05:18:28.792932 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-06 05:18:28.792943 | orchestrator | Monday 06 April 2026 05:17:51 +0000 (0:00:01.302) 0:10:20.738 ********** 2026-04-06 05:18:28.792953 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0 2026-04-06 05:18:28.792964 | orchestrator | 2026-04-06 05:18:28.792975 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-04-06 05:18:28.793034 | orchestrator | Monday 06 April 2026 05:17:51 +0000 (0:00:00.568) 0:10:21.307 ********** 2026-04-06 05:18:28.793045 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:18:28.793056 | orchestrator | 2026-04-06 05:18:28.793067 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-04-06 05:18:28.793079 | orchestrator | Monday 06 April 2026 05:17:52 +0000 (0:00:00.509) 0:10:21.816 ********** 2026-04-06 05:18:28.793090 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:18:28.793101 | orchestrator | 2026-04-06 05:18:28.793112 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-04-06 05:18:28.793124 | orchestrator | Monday 06 April 2026 05:17:52 +0000 (0:00:00.142) 0:10:21.958 ********** 2026-04-06 05:18:28.793138 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-06 05:18:28.793152 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-06 05:18:28.793164 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-06 
05:18:28.793178 | orchestrator | ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-04-06 05:18:28.793189 | orchestrator | 2026-04-06 05:18:28.793200 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-04-06 05:18:28.793210 | orchestrator | Monday 06 April 2026 05:17:58 +0000 (0:00:06.134) 0:10:28.092 ********** 2026-04-06 05:18:28.793221 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:18:28.793232 | orchestrator | 2026-04-06 05:18:28.793243 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-04-06 05:18:28.793254 | orchestrator | Monday 06 April 2026 05:17:58 +0000 (0:00:00.175) 0:10:28.268 ********** 2026-04-06 05:18:28.793265 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-06 05:18:28.793281 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-06 05:18:28.793299 | orchestrator | 2026-04-06 05:18:28.793317 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-04-06 05:18:28.793335 | orchestrator | Monday 06 April 2026 05:18:00 +0000 (0:00:02.290) 0:10:30.559 ********** 2026-04-06 05:18:28.793353 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-06 05:18:28.793371 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-06 05:18:28.793387 | orchestrator | 2026-04-06 05:18:28.793405 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-04-06 05:18:28.793437 | orchestrator | Monday 06 April 2026 05:18:01 +0000 (0:00:01.051) 0:10:31.611 ********** 2026-04-06 05:18:28.793457 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:18:28.793476 | orchestrator | 2026-04-06 05:18:28.793494 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-04-06 05:18:28.793513 | orchestrator | Monday 06 April 2026 05:18:02 +0000 (0:00:00.538) 
0:10:32.149 ********** 2026-04-06 05:18:28.793526 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:18:28.793537 | orchestrator | 2026-04-06 05:18:28.793548 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-06 05:18:28.793559 | orchestrator | Monday 06 April 2026 05:18:02 +0000 (0:00:00.136) 0:10:32.286 ********** 2026-04-06 05:18:28.793569 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:18:28.793580 | orchestrator | 2026-04-06 05:18:28.793591 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-06 05:18:28.793602 | orchestrator | Monday 06 April 2026 05:18:02 +0000 (0:00:00.131) 0:10:32.417 ********** 2026-04-06 05:18:28.793613 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0 2026-04-06 05:18:28.793623 | orchestrator | 2026-04-06 05:18:28.793634 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-04-06 05:18:28.793645 | orchestrator | Monday 06 April 2026 05:18:03 +0000 (0:00:00.860) 0:10:33.278 ********** 2026-04-06 05:18:28.793655 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:18:28.793666 | orchestrator | 2026-04-06 05:18:28.793677 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-04-06 05:18:28.793687 | orchestrator | Monday 06 April 2026 05:18:03 +0000 (0:00:00.160) 0:10:33.439 ********** 2026-04-06 05:18:28.793698 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:18:28.793709 | orchestrator | 2026-04-06 05:18:28.793720 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-04-06 05:18:28.793750 | orchestrator | Monday 06 April 2026 05:18:03 +0000 (0:00:00.154) 0:10:33.593 ********** 2026-04-06 05:18:28.793762 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0 2026-04-06 05:18:28.793772 | 
orchestrator | 2026-04-06 05:18:28.793783 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-04-06 05:18:28.793794 | orchestrator | Monday 06 April 2026 05:18:04 +0000 (0:00:00.580) 0:10:34.174 ********** 2026-04-06 05:18:28.793805 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:18:28.793815 | orchestrator | 2026-04-06 05:18:28.793826 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-04-06 05:18:28.793845 | orchestrator | Monday 06 April 2026 05:18:05 +0000 (0:00:01.108) 0:10:35.282 ********** 2026-04-06 05:18:28.793856 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:18:28.793867 | orchestrator | 2026-04-06 05:18:28.793877 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-04-06 05:18:28.793888 | orchestrator | Monday 06 April 2026 05:18:06 +0000 (0:00:00.904) 0:10:36.187 ********** 2026-04-06 05:18:28.793899 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:18:28.793909 | orchestrator | 2026-04-06 05:18:28.793920 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-04-06 05:18:28.793931 | orchestrator | Monday 06 April 2026 05:18:07 +0000 (0:00:01.466) 0:10:37.653 ********** 2026-04-06 05:18:28.793942 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:18:28.793952 | orchestrator | 2026-04-06 05:18:28.793963 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-06 05:18:28.793974 | orchestrator | Monday 06 April 2026 05:18:10 +0000 (0:00:03.038) 0:10:40.691 ********** 2026-04-06 05:18:28.794011 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:18:28.794078 | orchestrator | 2026-04-06 05:18:28.794089 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-04-06 05:18:28.794100 | orchestrator | 2026-04-06 05:18:28.794111 | orchestrator | TASK 
[Stop ceph mgr] *********************************************************** 2026-04-06 05:18:28.794122 | orchestrator | Monday 06 April 2026 05:18:11 +0000 (0:00:00.612) 0:10:41.304 ********** 2026-04-06 05:18:28.794141 | orchestrator | changed: [testbed-node-1] 2026-04-06 05:18:28.794152 | orchestrator | 2026-04-06 05:18:28.794163 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-04-06 05:18:28.794174 | orchestrator | Monday 06 April 2026 05:18:23 +0000 (0:00:11.882) 0:10:53.186 ********** 2026-04-06 05:18:28.794185 | orchestrator | changed: [testbed-node-1] 2026-04-06 05:18:28.794196 | orchestrator | 2026-04-06 05:18:28.794206 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-06 05:18:28.794217 | orchestrator | Monday 06 April 2026 05:18:25 +0000 (0:00:01.921) 0:10:55.108 ********** 2026-04-06 05:18:28.794228 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-04-06 05:18:28.794239 | orchestrator | 2026-04-06 05:18:28.794250 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-06 05:18:28.794261 | orchestrator | Monday 06 April 2026 05:18:25 +0000 (0:00:00.266) 0:10:55.375 ********** 2026-04-06 05:18:28.794272 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:18:28.794283 | orchestrator | 2026-04-06 05:18:28.794293 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-06 05:18:28.794304 | orchestrator | Monday 06 April 2026 05:18:26 +0000 (0:00:00.482) 0:10:55.857 ********** 2026-04-06 05:18:28.794315 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:18:28.794326 | orchestrator | 2026-04-06 05:18:28.794337 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-06 05:18:28.794348 | orchestrator | Monday 06 April 2026 05:18:26 +0000 (0:00:00.127) 0:10:55.985 
********** 2026-04-06 05:18:28.794359 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:18:28.794370 | orchestrator | 2026-04-06 05:18:28.794380 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-06 05:18:28.794391 | orchestrator | Monday 06 April 2026 05:18:26 +0000 (0:00:00.456) 0:10:56.442 ********** 2026-04-06 05:18:28.794402 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:18:28.794413 | orchestrator | 2026-04-06 05:18:28.794424 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-06 05:18:28.794434 | orchestrator | Monday 06 April 2026 05:18:26 +0000 (0:00:00.139) 0:10:56.581 ********** 2026-04-06 05:18:28.794445 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:18:28.794456 | orchestrator | 2026-04-06 05:18:28.794467 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-06 05:18:28.794478 | orchestrator | Monday 06 April 2026 05:18:27 +0000 (0:00:00.151) 0:10:56.732 ********** 2026-04-06 05:18:28.794489 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:18:28.794500 | orchestrator | 2026-04-06 05:18:28.794511 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-06 05:18:28.794522 | orchestrator | Monday 06 April 2026 05:18:27 +0000 (0:00:00.157) 0:10:56.890 ********** 2026-04-06 05:18:28.794532 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:28.794543 | orchestrator | 2026-04-06 05:18:28.794554 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-06 05:18:28.794565 | orchestrator | Monday 06 April 2026 05:18:27 +0000 (0:00:00.160) 0:10:57.050 ********** 2026-04-06 05:18:28.794576 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:18:28.794587 | orchestrator | 2026-04-06 05:18:28.794598 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] 
************ 2026-04-06 05:18:28.794609 | orchestrator | Monday 06 April 2026 05:18:27 +0000 (0:00:00.139) 0:10:57.190 ********** 2026-04-06 05:18:28.794620 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:18:28.794631 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-06 05:18:28.794642 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:18:28.794652 | orchestrator | 2026-04-06 05:18:28.794663 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-06 05:18:28.794674 | orchestrator | Monday 06 April 2026 05:18:28 +0000 (0:00:01.069) 0:10:58.259 ********** 2026-04-06 05:18:28.794692 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:18:28.794703 | orchestrator | 2026-04-06 05:18:28.794714 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-06 05:18:28.794732 | orchestrator | Monday 06 April 2026 05:18:28 +0000 (0:00:00.244) 0:10:58.504 ********** 2026-04-06 05:18:36.124344 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:18:36.124451 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-06 05:18:36.124467 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:18:36.124479 | orchestrator | 2026-04-06 05:18:36.124491 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-06 05:18:36.124503 | orchestrator | Monday 06 April 2026 05:18:31 +0000 (0:00:02.538) 0:11:01.043 ********** 2026-04-06 05:18:36.124531 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-06 05:18:36.124543 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-06 05:18:36.124554 | orchestrator | skipping: [testbed-node-1] => 
(item=testbed-node-2)  2026-04-06 05:18:36.124565 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:36.124576 | orchestrator | 2026-04-06 05:18:36.124587 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-06 05:18:36.124598 | orchestrator | Monday 06 April 2026 05:18:31 +0000 (0:00:00.442) 0:11:01.485 ********** 2026-04-06 05:18:36.124610 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-06 05:18:36.124624 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-06 05:18:36.124636 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-06 05:18:36.124647 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:36.124658 | orchestrator | 2026-04-06 05:18:36.124669 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-06 05:18:36.124680 | orchestrator | Monday 06 April 2026 05:18:32 +0000 (0:00:00.622) 0:11:02.108 ********** 2026-04-06 05:18:36.124694 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 
'ansible_loop_var': 'item'})  2026-04-06 05:18:36.124707 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:18:36.124719 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:18:36.124730 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:36.124741 | orchestrator | 2026-04-06 05:18:36.124752 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-06 05:18:36.124803 | orchestrator | Monday 06 April 2026 05:18:32 +0000 (0:00:00.166) 0:11:02.274 ********** 2026-04-06 05:18:36.124829 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '06ed7bf51830', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-06 05:18:29.681482', 'end': '2026-04-06 05:18:29.732794', 'delta': '0:00:00.051312', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['06ed7bf51830'], 'stderr_lines': [], 'failed': 
False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-06 05:18:36.124869 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '6879ce368bbc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-06 05:18:30.255682', 'end': '2026-04-06 05:18:30.304180', 'delta': '0:00:00.048498', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6879ce368bbc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-06 05:18:36.124885 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'a00606ebddc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-06 05:18:30.858207', 'end': '2026-04-06 05:18:30.907530', 'delta': '0:00:00.049323', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a00606ebddc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-06 05:18:36.124898 | orchestrator | 2026-04-06 05:18:36.124911 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-06 05:18:36.124924 | orchestrator | Monday 06 April 2026 05:18:32 +0000 (0:00:00.196) 0:11:02.470 ********** 2026-04-06 05:18:36.124938 | 
orchestrator | ok: [testbed-node-1] 2026-04-06 05:18:36.124960 | orchestrator | 2026-04-06 05:18:36.124974 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-06 05:18:36.125009 | orchestrator | Monday 06 April 2026 05:18:33 +0000 (0:00:00.264) 0:11:02.735 ********** 2026-04-06 05:18:36.125022 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:36.125037 | orchestrator | 2026-04-06 05:18:36.125049 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-06 05:18:36.125060 | orchestrator | Monday 06 April 2026 05:18:33 +0000 (0:00:00.260) 0:11:02.996 ********** 2026-04-06 05:18:36.125071 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:18:36.125081 | orchestrator | 2026-04-06 05:18:36.125092 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-06 05:18:36.125103 | orchestrator | Monday 06 April 2026 05:18:33 +0000 (0:00:00.145) 0:11:03.141 ********** 2026-04-06 05:18:36.125114 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:18:36.125124 | orchestrator | 2026-04-06 05:18:36.125135 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-06 05:18:36.125146 | orchestrator | Monday 06 April 2026 05:18:34 +0000 (0:00:00.995) 0:11:04.137 ********** 2026-04-06 05:18:36.125166 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:18:36.125177 | orchestrator | 2026-04-06 05:18:36.125188 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-06 05:18:36.125199 | orchestrator | Monday 06 April 2026 05:18:34 +0000 (0:00:00.152) 0:11:04.289 ********** 2026-04-06 05:18:36.125210 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:36.125220 | orchestrator | 2026-04-06 05:18:36.125231 | orchestrator | TASK [ceph-facts : Generate cluster fsid] 
************************************** 2026-04-06 05:18:36.125242 | orchestrator | Monday 06 April 2026 05:18:34 +0000 (0:00:00.132) 0:11:04.421 ********** 2026-04-06 05:18:36.125253 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:36.125264 | orchestrator | 2026-04-06 05:18:36.125275 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-06 05:18:36.125286 | orchestrator | Monday 06 April 2026 05:18:34 +0000 (0:00:00.232) 0:11:04.654 ********** 2026-04-06 05:18:36.125296 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:36.125307 | orchestrator | 2026-04-06 05:18:36.125318 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-06 05:18:36.125329 | orchestrator | Monday 06 April 2026 05:18:35 +0000 (0:00:00.135) 0:11:04.789 ********** 2026-04-06 05:18:36.125339 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:36.125350 | orchestrator | 2026-04-06 05:18:36.125361 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-06 05:18:36.125372 | orchestrator | Monday 06 April 2026 05:18:35 +0000 (0:00:00.466) 0:11:05.256 ********** 2026-04-06 05:18:36.125382 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:36.125393 | orchestrator | 2026-04-06 05:18:36.125409 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-06 05:18:36.125428 | orchestrator | Monday 06 April 2026 05:18:35 +0000 (0:00:00.127) 0:11:05.383 ********** 2026-04-06 05:18:36.125448 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:36.125463 | orchestrator | 2026-04-06 05:18:36.125474 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-06 05:18:36.125485 | orchestrator | Monday 06 April 2026 05:18:35 +0000 (0:00:00.131) 0:11:05.515 ********** 2026-04-06 05:18:36.125496 | orchestrator | skipping: 
[testbed-node-1] 2026-04-06 05:18:36.125507 | orchestrator | 2026-04-06 05:18:36.125517 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-06 05:18:36.125528 | orchestrator | Monday 06 April 2026 05:18:35 +0000 (0:00:00.172) 0:11:05.687 ********** 2026-04-06 05:18:36.125539 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:36.125550 | orchestrator | 2026-04-06 05:18:36.125561 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-06 05:18:36.125580 | orchestrator | Monday 06 April 2026 05:18:36 +0000 (0:00:00.149) 0:11:05.837 ********** 2026-04-06 05:18:36.600704 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:36.600805 | orchestrator | 2026-04-06 05:18:36.600822 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-06 05:18:36.600835 | orchestrator | Monday 06 April 2026 05:18:36 +0000 (0:00:00.144) 0:11:05.981 ********** 2026-04-06 05:18:36.600865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:18:36.600881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:18:36.600893 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:18:36.600927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-48-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-06 05:18:36.600942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:18:36.600954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:18:36.600965 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:18:36.601102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a48c2299', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-06 05:18:36.601133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:18:36.601149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:18:36.601162 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:36.601175 | orchestrator | 2026-04-06 05:18:36.601188 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-06 05:18:36.601201 | orchestrator | Monday 06 April 2026 05:18:36 +0000 (0:00:00.244) 0:11:06.226 ********** 2026-04-06 05:18:36.601215 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:18:36.601229 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:18:36.601252 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:18:39.564795 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-48-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:18:39.564944 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:18:39.564964 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:18:39.564976 | orchestrator | skipping: 
[testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:18:39.565047 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a48c2299', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_a48c2299-66c1-490a-8d0b-fe346fc666cd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:18:39.565083 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:18:39.565096 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:18:39.565108 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:39.565122 | orchestrator | 2026-04-06 05:18:39.565134 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-06 05:18:39.565147 | orchestrator | Monday 06 April 2026 05:18:36 +0000 (0:00:00.249) 0:11:06.475 ********** 2026-04-06 05:18:39.565158 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:18:39.565170 | orchestrator | 2026-04-06 05:18:39.565181 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-06 05:18:39.565192 | orchestrator | Monday 06 April 2026 05:18:37 +0000 (0:00:00.509) 0:11:06.985 ********** 2026-04-06 05:18:39.565203 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:18:39.565214 | orchestrator | 2026-04-06 05:18:39.565226 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 05:18:39.565237 | orchestrator | Monday 06 April 2026 05:18:37 +0000 (0:00:00.153) 0:11:07.139 ********** 2026-04-06 05:18:39.565248 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:18:39.565258 | orchestrator | 2026-04-06 05:18:39.565269 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 05:18:39.565280 | orchestrator | Monday 06 April 2026 05:18:37 +0000 (0:00:00.489) 0:11:07.629 ********** 2026-04-06 05:18:39.565291 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:39.565302 | orchestrator | 2026-04-06 05:18:39.565313 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 05:18:39.565326 | orchestrator | Monday 06 April 2026 05:18:38 
+0000 (0:00:00.131) 0:11:07.760 ********** 2026-04-06 05:18:39.565339 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:39.565353 | orchestrator | 2026-04-06 05:18:39.565365 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 05:18:39.565378 | orchestrator | Monday 06 April 2026 05:18:38 +0000 (0:00:00.227) 0:11:07.988 ********** 2026-04-06 05:18:39.565390 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:39.565402 | orchestrator | 2026-04-06 05:18:39.565415 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-06 05:18:39.565429 | orchestrator | Monday 06 April 2026 05:18:38 +0000 (0:00:00.437) 0:11:08.426 ********** 2026-04-06 05:18:39.565441 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-06 05:18:39.565461 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-06 05:18:39.565474 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-06 05:18:39.565486 | orchestrator | 2026-04-06 05:18:39.565499 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-06 05:18:39.565512 | orchestrator | Monday 06 April 2026 05:18:39 +0000 (0:00:00.675) 0:11:09.101 ********** 2026-04-06 05:18:39.565525 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-06 05:18:39.565537 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-06 05:18:39.565551 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-06 05:18:39.565564 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:39.565576 | orchestrator | 2026-04-06 05:18:39.565596 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-06 05:18:49.493868 | orchestrator | Monday 06 April 2026 05:18:39 +0000 (0:00:00.180) 0:11:09.282 ********** 2026-04-06 05:18:49.494088 | 
orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:49.494121 | orchestrator | 2026-04-06 05:18:49.494162 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-06 05:18:49.494181 | orchestrator | Monday 06 April 2026 05:18:39 +0000 (0:00:00.136) 0:11:09.418 ********** 2026-04-06 05:18:49.494198 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:18:49.494217 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-06 05:18:49.494235 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:18:49.494251 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-06 05:18:49.494268 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-06 05:18:49.494284 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-06 05:18:49.494300 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-06 05:18:49.494316 | orchestrator | 2026-04-06 05:18:49.494366 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-06 05:18:49.494383 | orchestrator | Monday 06 April 2026 05:18:40 +0000 (0:00:00.837) 0:11:10.256 ********** 2026-04-06 05:18:49.494400 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:18:49.494417 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-06 05:18:49.494435 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:18:49.494452 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-06 05:18:49.494470 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] 
=> (item=testbed-node-4) 2026-04-06 05:18:49.494486 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-06 05:18:49.494503 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-06 05:18:49.494519 | orchestrator | 2026-04-06 05:18:49.494536 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-06 05:18:49.494553 | orchestrator | Monday 06 April 2026 05:18:42 +0000 (0:00:01.693) 0:11:11.949 ********** 2026-04-06 05:18:49.494570 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-04-06 05:18:49.494587 | orchestrator | 2026-04-06 05:18:49.494604 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-06 05:18:49.494622 | orchestrator | Monday 06 April 2026 05:18:42 +0000 (0:00:00.221) 0:11:12.171 ********** 2026-04-06 05:18:49.494639 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-04-06 05:18:49.494657 | orchestrator | 2026-04-06 05:18:49.494674 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-06 05:18:49.494721 | orchestrator | Monday 06 April 2026 05:18:42 +0000 (0:00:00.208) 0:11:12.379 ********** 2026-04-06 05:18:49.494740 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:18:49.494759 | orchestrator | 2026-04-06 05:18:49.494775 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-06 05:18:49.494792 | orchestrator | Monday 06 April 2026 05:18:43 +0000 (0:00:00.530) 0:11:12.910 ********** 2026-04-06 05:18:49.494808 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:49.494825 | orchestrator | 2026-04-06 05:18:49.494841 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-04-06 05:18:49.494858 | orchestrator | Monday 06 April 2026 05:18:43 +0000 (0:00:00.143) 0:11:13.053 ********** 2026-04-06 05:18:49.494874 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:49.494891 | orchestrator | 2026-04-06 05:18:49.494907 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-06 05:18:49.494923 | orchestrator | Monday 06 April 2026 05:18:43 +0000 (0:00:00.411) 0:11:13.464 ********** 2026-04-06 05:18:49.494940 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:49.494956 | orchestrator | 2026-04-06 05:18:49.494972 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-06 05:18:49.495026 | orchestrator | Monday 06 April 2026 05:18:43 +0000 (0:00:00.148) 0:11:13.612 ********** 2026-04-06 05:18:49.495045 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:18:49.495060 | orchestrator | 2026-04-06 05:18:49.495077 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-06 05:18:49.495093 | orchestrator | Monday 06 April 2026 05:18:44 +0000 (0:00:00.548) 0:11:14.161 ********** 2026-04-06 05:18:49.495111 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:49.495127 | orchestrator | 2026-04-06 05:18:49.495143 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-06 05:18:49.495159 | orchestrator | Monday 06 April 2026 05:18:44 +0000 (0:00:00.137) 0:11:14.298 ********** 2026-04-06 05:18:49.495174 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:49.495190 | orchestrator | 2026-04-06 05:18:49.495206 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-06 05:18:49.495223 | orchestrator | Monday 06 April 2026 05:18:44 +0000 (0:00:00.145) 0:11:14.443 ********** 2026-04-06 05:18:49.495239 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:18:49.495255 | 
orchestrator | 2026-04-06 05:18:49.495271 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-06 05:18:49.495286 | orchestrator | Monday 06 April 2026 05:18:45 +0000 (0:00:00.563) 0:11:15.007 ********** 2026-04-06 05:18:49.495302 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:18:49.495318 | orchestrator | 2026-04-06 05:18:49.495335 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-06 05:18:49.495374 | orchestrator | Monday 06 April 2026 05:18:45 +0000 (0:00:00.565) 0:11:15.573 ********** 2026-04-06 05:18:49.495390 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:49.495406 | orchestrator | 2026-04-06 05:18:49.495423 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-06 05:18:49.495440 | orchestrator | Monday 06 April 2026 05:18:45 +0000 (0:00:00.136) 0:11:15.709 ********** 2026-04-06 05:18:49.495457 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:18:49.495473 | orchestrator | 2026-04-06 05:18:49.495489 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-06 05:18:49.495505 | orchestrator | Monday 06 April 2026 05:18:46 +0000 (0:00:00.151) 0:11:15.861 ********** 2026-04-06 05:18:49.495520 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:49.495537 | orchestrator | 2026-04-06 05:18:49.495553 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-06 05:18:49.495569 | orchestrator | Monday 06 April 2026 05:18:46 +0000 (0:00:00.137) 0:11:15.998 ********** 2026-04-06 05:18:49.495585 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:49.495601 | orchestrator | 2026-04-06 05:18:49.495617 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-06 05:18:49.495644 | orchestrator | Monday 06 April 2026 05:18:46 +0000 
(0:00:00.130) 0:11:16.129 ********** 2026-04-06 05:18:49.495662 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:49.495679 | orchestrator | 2026-04-06 05:18:49.495694 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-06 05:18:49.495711 | orchestrator | Monday 06 April 2026 05:18:46 +0000 (0:00:00.135) 0:11:16.264 ********** 2026-04-06 05:18:49.495727 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:49.495742 | orchestrator | 2026-04-06 05:18:49.495759 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-06 05:18:49.495776 | orchestrator | Monday 06 April 2026 05:18:46 +0000 (0:00:00.125) 0:11:16.389 ********** 2026-04-06 05:18:49.495792 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:18:49.495814 | orchestrator | 2026-04-06 05:18:49.495831 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-06 05:18:49.495847 | orchestrator | Monday 06 April 2026 05:18:46 +0000 (0:00:00.189) 0:11:16.579 ********** 2026-04-06 05:18:49.495863 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:18:49.495879 | orchestrator | 2026-04-06 05:18:49.495896 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-06 05:18:49.495912 | orchestrator | Monday 06 April 2026 05:18:47 +0000 (0:00:00.491) 0:11:17.071 ********** 2026-04-06 05:18:49.495928 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:18:49.495944 | orchestrator | 2026-04-06 05:18:49.495960 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-06 05:18:49.495976 | orchestrator | Monday 06 April 2026 05:18:47 +0000 (0:00:00.154) 0:11:17.226 ********** 2026-04-06 05:18:49.496017 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:18:49.496034 | orchestrator | 2026-04-06 05:18:49.496050 | orchestrator | TASK [ceph-common : Include 
configure_repository.yml] **************************
2026-04-06 05:18:49.496065 | orchestrator | Monday 06 April 2026 05:18:47 +0000 (0:00:00.220) 0:11:17.446 **********
2026-04-06 05:18:49.496081 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:18:49.496097 | orchestrator |
2026-04-06 05:18:49.496113 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-06 05:18:49.496129 | orchestrator | Monday 06 April 2026 05:18:47 +0000 (0:00:00.152) 0:11:17.598 **********
2026-04-06 05:18:49.496145 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:18:49.496161 | orchestrator |
2026-04-06 05:18:49.496175 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-06 05:18:49.496185 | orchestrator | Monday 06 April 2026 05:18:48 +0000 (0:00:00.129) 0:11:17.728 **********
2026-04-06 05:18:49.496194 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:18:49.496204 | orchestrator |
2026-04-06 05:18:49.496213 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-06 05:18:49.496223 | orchestrator | Monday 06 April 2026 05:18:48 +0000 (0:00:00.121) 0:11:17.849 **********
2026-04-06 05:18:49.496233 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:18:49.496242 | orchestrator |
2026-04-06 05:18:49.496252 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-06 05:18:49.496261 | orchestrator | Monday 06 April 2026 05:18:48 +0000 (0:00:00.131) 0:11:17.981 **********
2026-04-06 05:18:49.496271 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:18:49.496281 | orchestrator |
2026-04-06 05:18:49.496290 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-06 05:18:49.496300 | orchestrator | Monday 06 April 2026 05:18:48 +0000 (0:00:00.122) 0:11:18.104 **********
2026-04-06 05:18:49.496309 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:18:49.496319 | orchestrator |
2026-04-06 05:18:49.496329 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-06 05:18:49.496338 | orchestrator | Monday 06 April 2026 05:18:48 +0000 (0:00:00.133) 0:11:18.237 **********
2026-04-06 05:18:49.496348 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:18:49.496357 | orchestrator |
2026-04-06 05:18:49.496367 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-06 05:18:49.496385 | orchestrator | Monday 06 April 2026 05:18:48 +0000 (0:00:00.130) 0:11:18.368 **********
2026-04-06 05:18:49.496395 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:18:49.496404 | orchestrator |
2026-04-06 05:18:49.496414 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-06 05:18:49.496424 | orchestrator | Monday 06 April 2026 05:18:48 +0000 (0:00:00.141) 0:11:18.509 **********
2026-04-06 05:18:49.496434 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:18:49.496443 | orchestrator |
2026-04-06 05:18:49.496490 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-06 05:18:49.496501 | orchestrator | Monday 06 April 2026 05:18:48 +0000 (0:00:00.129) 0:11:18.638 **********
2026-04-06 05:18:49.496511 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:18:49.496520 | orchestrator |
2026-04-06 05:18:49.496530 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-06 05:18:49.496539 | orchestrator | Monday 06 April 2026 05:18:49 +0000 (0:00:00.437) 0:11:19.076 **********
2026-04-06 05:18:49.496549 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:18:49.496559 | orchestrator |
2026-04-06 05:18:49.496577 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-06 05:19:06.749165 | orchestrator | Monday 06 April 2026 05:18:49 +0000 (0:00:00.130) 0:11:19.207 **********
2026-04-06 05:19:06.749284 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:06.749301 | orchestrator |
2026-04-06 05:19:06.749314 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-06 05:19:06.749326 | orchestrator | Monday 06 April 2026 05:18:49 +0000 (0:00:00.212) 0:11:19.419 **********
2026-04-06 05:19:06.749338 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:19:06.749350 | orchestrator |
2026-04-06 05:19:06.749361 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-06 05:19:06.749372 | orchestrator | Monday 06 April 2026 05:18:50 +0000 (0:00:00.899) 0:11:20.319 **********
2026-04-06 05:19:06.749383 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:19:06.749394 | orchestrator |
2026-04-06 05:19:06.749405 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-06 05:19:06.749416 | orchestrator | Monday 06 April 2026 05:18:51 +0000 (0:00:01.392) 0:11:21.712 **********
2026-04-06 05:19:06.749427 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1
2026-04-06 05:19:06.749439 | orchestrator |
2026-04-06 05:19:06.749450 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-06 05:19:06.749461 | orchestrator | Monday 06 April 2026 05:18:52 +0000 (0:00:00.211) 0:11:21.924 **********
2026-04-06 05:19:06.749473 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:06.749484 | orchestrator |
2026-04-06 05:19:06.749495 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-06 05:19:06.749506 | orchestrator | Monday 06 April 2026 05:18:52 +0000 (0:00:00.144) 0:11:22.068 **********
2026-04-06 05:19:06.749517 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:06.749528 | orchestrator |
2026-04-06 05:19:06.749539 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-06 05:19:06.749550 | orchestrator | Monday 06 April 2026 05:18:52 +0000 (0:00:00.150) 0:11:22.218 **********
2026-04-06 05:19:06.749561 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-06 05:19:06.749572 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-06 05:19:06.749584 | orchestrator |
2026-04-06 05:19:06.749594 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-06 05:19:06.749605 | orchestrator | Monday 06 April 2026 05:18:53 +0000 (0:00:00.869) 0:11:23.088 **********
2026-04-06 05:19:06.749616 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:19:06.749627 | orchestrator |
2026-04-06 05:19:06.749641 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-06 05:19:06.749654 | orchestrator | Monday 06 April 2026 05:18:53 +0000 (0:00:00.482) 0:11:23.571 **********
2026-04-06 05:19:06.749691 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:06.749704 | orchestrator |
2026-04-06 05:19:06.749717 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-06 05:19:06.749730 | orchestrator | Monday 06 April 2026 05:18:54 +0000 (0:00:00.148) 0:11:23.720 **********
2026-04-06 05:19:06.749743 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:06.749755 | orchestrator |
2026-04-06 05:19:06.749768 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-06 05:19:06.749782 | orchestrator | Monday 06 April 2026 05:18:54 +0000 (0:00:00.421) 0:11:24.141 **********
2026-04-06 05:19:06.749795 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:06.749807 | orchestrator |
2026-04-06 05:19:06.749820 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-06 05:19:06.749831 | orchestrator | Monday 06 April 2026 05:18:54 +0000 (0:00:00.125) 0:11:24.267 **********
2026-04-06 05:19:06.749842 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-04-06 05:19:06.749853 | orchestrator |
2026-04-06 05:19:06.749863 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-06 05:19:06.749874 | orchestrator | Monday 06 April 2026 05:18:54 +0000 (0:00:00.277) 0:11:24.544 **********
2026-04-06 05:19:06.749885 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:19:06.749896 | orchestrator |
2026-04-06 05:19:06.749907 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-06 05:19:06.749917 | orchestrator | Monday 06 April 2026 05:18:55 +0000 (0:00:00.720) 0:11:25.265 **********
2026-04-06 05:19:06.749928 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-06 05:19:06.749939 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-06 05:19:06.749950 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-06 05:19:06.749960 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:06.749971 | orchestrator |
2026-04-06 05:19:06.749982 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-06 05:19:06.750069 | orchestrator | Monday 06 April 2026 05:18:55 +0000 (0:00:00.162) 0:11:25.428 **********
2026-04-06 05:19:06.750084 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:06.750096 | orchestrator |
2026-04-06 05:19:06.750107 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-06 05:19:06.750117 | orchestrator | Monday 06 April 2026 05:18:55 +0000 (0:00:00.151) 0:11:25.579 **********
2026-04-06 05:19:06.750128 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:06.750139 | orchestrator |
2026-04-06 05:19:06.750150 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-06 05:19:06.750161 | orchestrator | Monday 06 April 2026 05:18:56 +0000 (0:00:00.183) 0:11:25.763 **********
2026-04-06 05:19:06.750172 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:06.750183 | orchestrator |
2026-04-06 05:19:06.750194 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-06 05:19:06.750204 | orchestrator | Monday 06 April 2026 05:18:56 +0000 (0:00:00.165) 0:11:25.928 **********
2026-04-06 05:19:06.750215 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:06.750226 | orchestrator |
2026-04-06 05:19:06.750256 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-06 05:19:06.750275 | orchestrator | Monday 06 April 2026 05:18:56 +0000 (0:00:00.144) 0:11:26.073 **********
2026-04-06 05:19:06.750287 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:06.750297 | orchestrator |
2026-04-06 05:19:06.750308 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-06 05:19:06.750319 | orchestrator | Monday 06 April 2026 05:18:56 +0000 (0:00:00.150) 0:11:26.223 **********
2026-04-06 05:19:06.750330 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:19:06.750340 | orchestrator |
2026-04-06 05:19:06.750351 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-06 05:19:06.750371 | orchestrator | Monday 06 April 2026 05:18:57 +0000 (0:00:01.465) 0:11:27.689 **********
2026-04-06 05:19:06.750382 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:19:06.750393 | orchestrator |
2026-04-06 05:19:06.750404 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-06 05:19:06.750414 | orchestrator | Monday 06 April 2026 05:18:58 +0000 (0:00:00.147) 0:11:27.836 **********
2026-04-06 05:19:06.750425 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-04-06 05:19:06.750436 | orchestrator |
2026-04-06 05:19:06.750446 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-06 05:19:06.750457 | orchestrator | Monday 06 April 2026 05:18:58 +0000 (0:00:00.515) 0:11:28.351 **********
2026-04-06 05:19:06.750468 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:06.750479 | orchestrator |
2026-04-06 05:19:06.750489 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-06 05:19:06.750500 | orchestrator | Monday 06 April 2026 05:18:58 +0000 (0:00:00.167) 0:11:28.518 **********
2026-04-06 05:19:06.750511 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:06.750521 | orchestrator |
2026-04-06 05:19:06.750532 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-06 05:19:06.750543 | orchestrator | Monday 06 April 2026 05:18:58 +0000 (0:00:00.166) 0:11:28.685 **********
2026-04-06 05:19:06.750554 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:06.750564 | orchestrator |
2026-04-06 05:19:06.750575 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-06 05:19:06.750636 | orchestrator | Monday 06 April 2026 05:18:59 +0000 (0:00:00.155) 0:11:28.840 **********
2026-04-06 05:19:06.750650 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:06.750661 | orchestrator |
2026-04-06 05:19:06.750671 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-06 05:19:06.750682 | orchestrator | Monday 06 April 2026 05:18:59 +0000 (0:00:00.164) 0:11:29.005 **********
2026-04-06 05:19:06.750693 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:06.750704 | orchestrator |
2026-04-06 05:19:06.750714 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-06 05:19:06.750725 | orchestrator | Monday 06 April 2026 05:18:59 +0000 (0:00:00.148) 0:11:29.153 **********
2026-04-06 05:19:06.750736 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:06.750747 | orchestrator |
2026-04-06 05:19:06.750757 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-06 05:19:06.750768 | orchestrator | Monday 06 April 2026 05:18:59 +0000 (0:00:00.161) 0:11:29.315 **********
2026-04-06 05:19:06.750779 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:06.750790 | orchestrator |
2026-04-06 05:19:06.750800 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-06 05:19:06.750811 | orchestrator | Monday 06 April 2026 05:18:59 +0000 (0:00:00.185) 0:11:29.500 **********
2026-04-06 05:19:06.750822 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:06.750833 | orchestrator |
2026-04-06 05:19:06.750843 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-06 05:19:06.750859 | orchestrator | Monday 06 April 2026 05:18:59 +0000 (0:00:00.137) 0:11:29.638 **********
2026-04-06 05:19:06.750877 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:19:06.750895 | orchestrator |
2026-04-06 05:19:06.750913 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-06 05:19:06.750932 | orchestrator | Monday 06 April 2026 05:19:00 +0000 (0:00:00.232) 0:11:29.871 **********
2026-04-06 05:19:06.750949 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-04-06 05:19:06.750967 | orchestrator |
2026-04-06 05:19:06.750979 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-06 05:19:06.750990 | orchestrator | Monday 06 April 2026 05:19:00 +0000 (0:00:00.214) 0:11:30.085 **********
2026-04-06 05:19:06.751068 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-04-06 05:19:06.751089 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-04-06 05:19:06.751108 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-04-06 05:19:06.751128 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-04-06 05:19:06.751147 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-04-06 05:19:06.751160 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-04-06 05:19:06.751171 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-04-06 05:19:06.751181 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-04-06 05:19:06.751192 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-06 05:19:06.751203 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-06 05:19:06.751214 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-06 05:19:06.751225 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-06 05:19:06.751235 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-06 05:19:06.751246 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-06 05:19:06.751257 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-04-06 05:19:06.751268 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-04-06 05:19:06.751279 | orchestrator |
2026-04-06 05:19:06.751299 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-06 05:19:24.480451 | orchestrator | Monday 06 April 2026 05:19:06 +0000 (0:00:06.368) 0:11:36.453 **********
2026-04-06 05:19:24.480617 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.480648 | orchestrator |
2026-04-06 05:19:24.480670 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-06 05:19:24.480749 | orchestrator | Monday 06 April 2026 05:19:06 +0000 (0:00:00.134) 0:11:36.588 **********
2026-04-06 05:19:24.480773 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.480792 | orchestrator |
2026-04-06 05:19:24.480811 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-06 05:19:24.480830 | orchestrator | Monday 06 April 2026 05:19:07 +0000 (0:00:00.135) 0:11:36.723 **********
2026-04-06 05:19:24.480848 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.480867 | orchestrator |
2026-04-06 05:19:24.480886 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-06 05:19:24.480904 | orchestrator | Monday 06 April 2026 05:19:07 +0000 (0:00:00.140) 0:11:36.864 **********
2026-04-06 05:19:24.480922 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.480939 | orchestrator |
2026-04-06 05:19:24.480957 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-06 05:19:24.480976 | orchestrator | Monday 06 April 2026 05:19:07 +0000 (0:00:00.134) 0:11:36.999 **********
2026-04-06 05:19:24.480994 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.481044 | orchestrator |
2026-04-06 05:19:24.481063 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-06 05:19:24.481082 | orchestrator | Monday 06 April 2026 05:19:07 +0000 (0:00:00.136) 0:11:37.135 **********
2026-04-06 05:19:24.481101 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.481119 | orchestrator |
2026-04-06 05:19:24.481138 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-06 05:19:24.481160 | orchestrator | Monday 06 April 2026 05:19:07 +0000 (0:00:00.135) 0:11:37.271 **********
2026-04-06 05:19:24.481180 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.481199 | orchestrator |
2026-04-06 05:19:24.481218 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-06 05:19:24.481238 | orchestrator | Monday 06 April 2026 05:19:07 +0000 (0:00:00.143) 0:11:37.414 **********
2026-04-06 05:19:24.481259 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.481278 | orchestrator |
2026-04-06 05:19:24.481335 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-06 05:19:24.481355 | orchestrator | Monday 06 April 2026 05:19:07 +0000 (0:00:00.132) 0:11:37.547 **********
2026-04-06 05:19:24.481368 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.481380 | orchestrator |
2026-04-06 05:19:24.481391 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-06 05:19:24.481402 | orchestrator | Monday 06 April 2026 05:19:07 +0000 (0:00:00.138) 0:11:37.686 **********
2026-04-06 05:19:24.481412 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.481423 | orchestrator |
2026-04-06 05:19:24.481434 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-06 05:19:24.481445 | orchestrator | Monday 06 April 2026 05:19:08 +0000 (0:00:00.140) 0:11:37.826 **********
2026-04-06 05:19:24.481455 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.481466 | orchestrator |
2026-04-06 05:19:24.481477 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-06 05:19:24.481488 | orchestrator | Monday 06 April 2026 05:19:08 +0000 (0:00:00.141) 0:11:37.967 **********
2026-04-06 05:19:24.481499 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.481509 | orchestrator |
2026-04-06 05:19:24.481527 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-06 05:19:24.481538 | orchestrator | Monday 06 April 2026 05:19:08 +0000 (0:00:00.131) 0:11:38.099 **********
2026-04-06 05:19:24.481548 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.481559 | orchestrator |
2026-04-06 05:19:24.481570 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-06 05:19:24.481581 | orchestrator | Monday 06 April 2026 05:19:09 +0000 (0:00:00.930) 0:11:39.029 **********
2026-04-06 05:19:24.481592 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.481602 | orchestrator |
2026-04-06 05:19:24.481613 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-06 05:19:24.481623 | orchestrator | Monday 06 April 2026 05:19:09 +0000 (0:00:00.163) 0:11:39.193 **********
2026-04-06 05:19:24.481634 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.481645 | orchestrator |
2026-04-06 05:19:24.481656 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-06 05:19:24.481667 | orchestrator | Monday 06 April 2026 05:19:09 +0000 (0:00:00.247) 0:11:39.441 **********
2026-04-06 05:19:24.481677 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.481688 | orchestrator |
2026-04-06 05:19:24.481698 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-06 05:19:24.481709 | orchestrator | Monday 06 April 2026 05:19:09 +0000 (0:00:00.133) 0:11:39.574 **********
2026-04-06 05:19:24.481720 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.481731 | orchestrator |
2026-04-06 05:19:24.481742 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-06 05:19:24.481754 | orchestrator | Monday 06 April 2026 05:19:09 +0000 (0:00:00.133) 0:11:39.707 **********
2026-04-06 05:19:24.481765 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.481776 | orchestrator |
2026-04-06 05:19:24.481787 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-06 05:19:24.481797 | orchestrator | Monday 06 April 2026 05:19:10 +0000 (0:00:00.146) 0:11:39.854 **********
2026-04-06 05:19:24.481808 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.481819 | orchestrator |
2026-04-06 05:19:24.481829 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-06 05:19:24.481840 | orchestrator | Monday 06 April 2026 05:19:10 +0000 (0:00:00.135) 0:11:39.990 **********
2026-04-06 05:19:24.481851 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.481862 | orchestrator |
2026-04-06 05:19:24.481903 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-06 05:19:24.481915 | orchestrator | Monday 06 April 2026 05:19:10 +0000 (0:00:00.123) 0:11:40.114 **********
2026-04-06 05:19:24.481936 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.481947 | orchestrator |
2026-04-06 05:19:24.481958 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-06 05:19:24.481968 | orchestrator | Monday 06 April 2026 05:19:10 +0000 (0:00:00.135) 0:11:40.249 **********
2026-04-06 05:19:24.481979 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-06 05:19:24.481990 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-06 05:19:24.482091 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-06 05:19:24.482108 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.482119 | orchestrator |
2026-04-06 05:19:24.482130 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-06 05:19:24.482141 | orchestrator | Monday 06 April 2026 05:19:10 +0000 (0:00:00.385) 0:11:40.634 **********
2026-04-06 05:19:24.482152 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-06 05:19:24.482163 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-06 05:19:24.482173 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-06 05:19:24.482184 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.482195 | orchestrator |
2026-04-06 05:19:24.482206 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-06 05:19:24.482216 | orchestrator | Monday 06 April 2026 05:19:11 +0000 (0:00:00.418) 0:11:41.053 **********
2026-04-06 05:19:24.482227 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-06 05:19:24.482238 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-06 05:19:24.482249 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-06 05:19:24.482260 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.482270 | orchestrator |
2026-04-06 05:19:24.482281 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-06 05:19:24.482292 | orchestrator | Monday 06 April 2026 05:19:12 +0000 (0:00:00.786) 0:11:41.839 **********
2026-04-06 05:19:24.482303 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.482314 | orchestrator |
2026-04-06 05:19:24.482325 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-06 05:19:24.482336 | orchestrator | Monday 06 April 2026 05:19:12 +0000 (0:00:00.151) 0:11:41.991 **********
2026-04-06 05:19:24.482347 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-06 05:19:24.482358 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.482368 | orchestrator |
2026-04-06 05:19:24.482379 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-06 05:19:24.482390 | orchestrator | Monday 06 April 2026 05:19:13 +0000 (0:00:01.011) 0:11:43.002 **********
2026-04-06 05:19:24.482401 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:19:24.482412 | orchestrator |
2026-04-06 05:19:24.482717 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-06 05:19:24.482731 | orchestrator | Monday 06 April 2026 05:19:14 +0000 (0:00:00.864) 0:11:43.867 **********
2026-04-06 05:19:24.482743 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-06 05:19:24.482755 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-06 05:19:24.482766 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 05:19:24.482777 | orchestrator |
2026-04-06 05:19:24.482788 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-06 05:19:24.482799 | orchestrator | Monday 06 April 2026 05:19:14 +0000 (0:00:00.691) 0:11:44.559 **********
2026-04-06 05:19:24.482809 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1
2026-04-06 05:19:24.482820 | orchestrator |
2026-04-06 05:19:24.482831 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-04-06 05:19:24.482842 | orchestrator | Monday 06 April 2026 05:19:15 +0000 (0:00:00.212) 0:11:44.771 **********
2026-04-06 05:19:24.482863 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:19:24.482875 | orchestrator |
2026-04-06 05:19:24.482886 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-04-06 05:19:24.482896 | orchestrator | Monday 06 April 2026 05:19:15 +0000 (0:00:00.507) 0:11:45.278 **********
2026-04-06 05:19:24.482907 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:24.482918 | orchestrator |
2026-04-06 05:19:24.482929 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-04-06 05:19:24.482940 | orchestrator | Monday 06 April 2026 05:19:15 +0000 (0:00:00.136) 0:11:45.415 **********
2026-04-06 05:19:24.482951 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-06 05:19:24.482962 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-06 05:19:24.482973 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-06 05:19:24.482984 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}]
2026-04-06 05:19:24.482994 | orchestrator |
2026-04-06 05:19:24.483063 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-04-06 05:19:24.483074 | orchestrator | Monday 06 April 2026 05:19:22 +0000 (0:00:06.420) 0:11:51.835 **********
2026-04-06 05:19:24.483085 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:19:24.483096 | orchestrator |
2026-04-06 05:19:24.483106 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-04-06 05:19:24.483117 | orchestrator | Monday 06 April 2026 05:19:22 +0000 (0:00:00.165) 0:11:52.000 **********
2026-04-06 05:19:24.483128 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-06 05:19:24.483139 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-06 05:19:24.483150 | orchestrator |
2026-04-06 05:19:24.483174 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-04-06 05:19:44.219517 | orchestrator | Monday 06 April 2026 05:19:24 +0000 (0:00:02.188) 0:11:54.189 **********
2026-04-06 05:19:44.219620 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-06 05:19:44.219631 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-04-06 05:19:44.219640 | orchestrator |
2026-04-06 05:19:44.219648 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-04-06 05:19:44.219656 | orchestrator | Monday 06 April 2026 05:19:25 +0000 (0:00:01.040) 0:11:55.229 **********
2026-04-06 05:19:44.219662 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:19:44.219670 | orchestrator |
2026-04-06 05:19:44.219677 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-04-06 05:19:44.219683 | orchestrator | Monday 06 April 2026 05:19:26 +0000 (0:00:00.781) 0:11:56.011 **********
2026-04-06 05:19:44.219691 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:44.219698 | orchestrator |
2026-04-06 05:19:44.219705 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-06 05:19:44.219712 | orchestrator | Monday 06 April 2026 05:19:26 +0000 (0:00:00.137) 0:11:56.149 **********
2026-04-06 05:19:44.219719 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:44.219726 | orchestrator |
2026-04-06 05:19:44.219733 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-06 05:19:44.219739 | orchestrator | Monday 06 April 2026 05:19:26 +0000 (0:00:00.120) 0:11:56.269 **********
2026-04-06 05:19:44.219747 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1
2026-04-06 05:19:44.219755 | orchestrator |
2026-04-06 05:19:44.219762 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-04-06 05:19:44.219769 | orchestrator | Monday 06 April 2026 05:19:26 +0000 (0:00:00.210) 0:11:56.480 **********
2026-04-06 05:19:44.219776 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:44.219783 | orchestrator |
2026-04-06 05:19:44.219790 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-04-06 05:19:44.219797 | orchestrator | Monday 06 April 2026 05:19:26 +0000 (0:00:00.141) 0:11:56.621 **********
2026-04-06 05:19:44.219825 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:44.219832 | orchestrator |
2026-04-06 05:19:44.219839 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-04-06 05:19:44.219846 | orchestrator | Monday 06 April 2026 05:19:27 +0000 (0:00:00.160) 0:11:56.782 **********
2026-04-06 05:19:44.219853 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1
2026-04-06 05:19:44.219860 | orchestrator |
2026-04-06 05:19:44.219867 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-04-06 05:19:44.219873 | orchestrator | Monday 06 April 2026 05:19:27 +0000 (0:00:00.204) 0:11:56.986 **********
2026-04-06 05:19:44.219880 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:19:44.219887 | orchestrator |
2026-04-06 05:19:44.219893 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-06 05:19:44.219900 | orchestrator | Monday 06 April 2026 05:19:28 +0000 (0:00:01.131) 0:11:58.117 **********
2026-04-06 05:19:44.219907 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:19:44.219914 | orchestrator |
2026-04-06 05:19:44.219921 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-06 05:19:44.219928 | orchestrator | Monday 06 April 2026 05:19:29 +0000 (0:00:00.990) 0:11:59.108 **********
2026-04-06 05:19:44.219935 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:19:44.219941 | orchestrator |
2026-04-06 05:19:44.219948 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-06 05:19:44.219955 | orchestrator | Monday 06 April 2026 05:19:30 +0000 (0:00:01.336) 0:12:00.444 **********
2026-04-06 05:19:44.219962 | orchestrator | changed: [testbed-node-1]
2026-04-06 05:19:44.219969 | orchestrator |
2026-04-06 05:19:44.219975 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-06 05:19:44.219982 | orchestrator | Monday 06 April 2026 05:19:33 +0000 (0:00:02.840) 0:12:03.284 **********
2026-04-06 05:19:44.219990 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:19:44.219997 | orchestrator |
2026-04-06 05:19:44.220003 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-04-06 05:19:44.220059 | orchestrator |
2026-04-06 05:19:44.220066 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-04-06 05:19:44.220073 | orchestrator | Monday 06 April 2026 05:19:34 +0000 (0:00:00.886) 0:12:04.171 **********
2026-04-06 05:19:44.220080 | orchestrator | changed: [testbed-node-2]
2026-04-06 05:19:44.220086 | orchestrator |
2026-04-06 05:19:44.220093 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-04-06 05:19:44.220100 | orchestrator | Monday 06 April 2026 05:19:36 +0000 (0:00:01.817) 0:12:05.988 **********
2026-04-06 05:19:44.220106 | orchestrator | changed: [testbed-node-2]
2026-04-06 05:19:44.220113 | orchestrator |
2026-04-06 05:19:44.220119 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-06 05:19:44.220126 | orchestrator | Monday 06 April 2026 05:19:37 +0000 (0:00:01.562) 0:12:07.551 **********
2026-04-06 05:19:44.220132 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2
2026-04-06 05:19:44.220139 | orchestrator |
2026-04-06 05:19:44.220146 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-06 05:19:44.220153 | orchestrator | Monday 06 April 2026 05:19:38 +0000 (0:00:00.264) 0:12:07.815 **********
2026-04-06 05:19:44.220159 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:19:44.220166 | orchestrator |
2026-04-06 05:19:44.220173 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-06 05:19:44.220180 | orchestrator | Monday 06 April 2026 05:19:38 +0000 (0:00:00.464) 0:12:08.280 **********
2026-04-06 05:19:44.220186 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:19:44.220193 | orchestrator |
2026-04-06 05:19:44.220200 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-06 05:19:44.220207 | orchestrator | Monday 06 April 2026 05:19:38 +0000 (0:00:00.145) 0:12:08.426 **********
2026-04-06 05:19:44.220213 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:19:44.220219 | orchestrator |
2026-04-06 05:19:44.220226 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-06 05:19:44.220257 | orchestrator | Monday 06 April 2026 05:19:39 +0000 (0:00:00.150) 0:12:08.881 **********
2026-04-06 05:19:44.220272 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:19:44.220279 | orchestrator |
2026-04-06 05:19:44.220286 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-06 05:19:44.220292 | orchestrator | Monday 06 April 2026 05:19:39 +0000 (0:00:00.147) 0:12:09.032 **********
2026-04-06 05:19:44.220299 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:19:44.220306 | orchestrator |
2026-04-06 05:19:44.220313 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-06 05:19:44.220319 | orchestrator | Monday 06 April 2026 05:19:39 +0000 (0:00:00.155) 0:12:09.179 **********
2026-04-06 05:19:44.220326 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:19:44.220333 | orchestrator |
2026-04-06 05:19:44.220339 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-06 05:19:44.220346 | orchestrator | Monday 06 April 2026 05:19:39 +0000 (0:00:00.155) 0:12:09.335 **********
2026-04-06 05:19:44.220353 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:19:44.220360 | orchestrator |
2026-04-06 05:19:44.220367 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-06 05:19:44.220373 | orchestrator | Monday 06 April 2026 05:19:40 +0000 (0:00:00.472) 0:12:09.808 **********
2026-04-06 05:19:44.220380 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:19:44.220386 | orchestrator |
2026-04-06 05:19:44.220393 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-06 05:19:44.220400 | orchestrator | Monday 06 April 2026 05:19:40 +0000 (0:00:00.144) 0:12:09.952 **********
2026-04-06 05:19:44.220406 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-06 05:19:44.220413 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 05:19:44.220419 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-06 05:19:44.220426 | orchestrator |
2026-04-06 05:19:44.220433 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-06 05:19:44.220440 | orchestrator | Monday 06 April 2026 05:19:40 +0000 (0:00:00.682) 0:12:10.634 **********
2026-04-06 05:19:44.220446 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:19:44.220453 | orchestrator |
2026-04-06 05:19:44.220460 | orchestrator | TASK [ceph-facts :
Find a running mon container] ******************************* 2026-04-06 05:19:44.220467 | orchestrator | Monday 06 April 2026 05:19:41 +0000 (0:00:00.264) 0:12:10.899 ********** 2026-04-06 05:19:44.220474 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:19:44.220481 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:19:44.220488 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-06 05:19:44.220494 | orchestrator | 2026-04-06 05:19:44.220501 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-06 05:19:44.220507 | orchestrator | Monday 06 April 2026 05:19:43 +0000 (0:00:01.879) 0:12:12.778 ********** 2026-04-06 05:19:44.220514 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-06 05:19:44.220521 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-06 05:19:44.220528 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-06 05:19:44.220534 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:19:44.220541 | orchestrator | 2026-04-06 05:19:44.220548 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-06 05:19:44.220555 | orchestrator | Monday 06 April 2026 05:19:43 +0000 (0:00:00.436) 0:12:13.214 ********** 2026-04-06 05:19:44.220564 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-06 05:19:44.220579 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 
'item'})  2026-04-06 05:19:44.220586 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-06 05:19:44.220593 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:19:44.220600 | orchestrator | 2026-04-06 05:19:44.220607 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-06 05:19:44.220614 | orchestrator | Monday 06 April 2026 05:19:44 +0000 (0:00:00.647) 0:12:13.862 ********** 2026-04-06 05:19:44.220623 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:19:44.220641 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:19:49.565353 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 
'item'})  2026-04-06 05:19:49.565466 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:19:49.565484 | orchestrator | 2026-04-06 05:19:49.565497 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-06 05:19:49.565510 | orchestrator | Monday 06 April 2026 05:19:44 +0000 (0:00:00.179) 0:12:14.041 ********** 2026-04-06 05:19:49.565524 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '06ed7bf51830', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-06 05:19:41.683934', 'end': '2026-04-06 05:19:41.741564', 'delta': '0:00:00.057630', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['06ed7bf51830'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-06 05:19:49.565539 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '6879ce368bbc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-06 05:19:42.263245', 'end': '2026-04-06 05:19:42.313643', 'delta': '0:00:00.050398', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6879ce368bbc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-06 05:19:49.565577 | orchestrator | ok: 
[testbed-node-2] => (item={'changed': False, 'stdout': 'a00606ebddc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-06 05:19:42.842060', 'end': '2026-04-06 05:19:42.892579', 'delta': '0:00:00.050519', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a00606ebddc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-06 05:19:49.565589 | orchestrator | 2026-04-06 05:19:49.565601 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-06 05:19:49.565612 | orchestrator | Monday 06 April 2026 05:19:44 +0000 (0:00:00.216) 0:12:14.257 ********** 2026-04-06 05:19:49.565623 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:19:49.565634 | orchestrator | 2026-04-06 05:19:49.565645 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-06 05:19:49.565656 | orchestrator | Monday 06 April 2026 05:19:44 +0000 (0:00:00.296) 0:12:14.554 ********** 2026-04-06 05:19:49.565667 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:19:49.565678 | orchestrator | 2026-04-06 05:19:49.565689 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-06 05:19:49.565699 | orchestrator | Monday 06 April 2026 05:19:45 +0000 (0:00:00.277) 0:12:14.832 ********** 2026-04-06 05:19:49.565710 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:19:49.565721 | orchestrator | 2026-04-06 05:19:49.565732 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-06 05:19:49.565743 | 
orchestrator | Monday 06 April 2026 05:19:45 +0000 (0:00:00.145) 0:12:14.977 ********** 2026-04-06 05:19:49.565753 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:19:49.565765 | orchestrator | 2026-04-06 05:19:49.565775 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-06 05:19:49.565786 | orchestrator | Monday 06 April 2026 05:19:47 +0000 (0:00:02.355) 0:12:17.333 ********** 2026-04-06 05:19:49.565797 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:19:49.565807 | orchestrator | 2026-04-06 05:19:49.565818 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-06 05:19:49.565843 | orchestrator | Monday 06 April 2026 05:19:48 +0000 (0:00:00.489) 0:12:17.823 ********** 2026-04-06 05:19:49.565873 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:19:49.565887 | orchestrator | 2026-04-06 05:19:49.565900 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-06 05:19:49.565913 | orchestrator | Monday 06 April 2026 05:19:48 +0000 (0:00:00.123) 0:12:17.946 ********** 2026-04-06 05:19:49.565925 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:19:49.565938 | orchestrator | 2026-04-06 05:19:49.565950 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-06 05:19:49.565962 | orchestrator | Monday 06 April 2026 05:19:48 +0000 (0:00:00.254) 0:12:18.200 ********** 2026-04-06 05:19:49.565974 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:19:49.565987 | orchestrator | 2026-04-06 05:19:49.565999 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-06 05:19:49.566114 | orchestrator | Monday 06 April 2026 05:19:48 +0000 (0:00:00.125) 0:12:18.325 ********** 2026-04-06 05:19:49.566140 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:19:49.566160 | 
orchestrator | 2026-04-06 05:19:49.566173 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-06 05:19:49.566186 | orchestrator | Monday 06 April 2026 05:19:48 +0000 (0:00:00.155) 0:12:18.481 ********** 2026-04-06 05:19:49.566199 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:19:49.566211 | orchestrator | 2026-04-06 05:19:49.566223 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-06 05:19:49.566247 | orchestrator | Monday 06 April 2026 05:19:48 +0000 (0:00:00.141) 0:12:18.623 ********** 2026-04-06 05:19:49.566258 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:19:49.566269 | orchestrator | 2026-04-06 05:19:49.566280 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-06 05:19:49.566291 | orchestrator | Monday 06 April 2026 05:19:49 +0000 (0:00:00.134) 0:12:18.757 ********** 2026-04-06 05:19:49.566302 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:19:49.566313 | orchestrator | 2026-04-06 05:19:49.566324 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-06 05:19:49.566335 | orchestrator | Monday 06 April 2026 05:19:49 +0000 (0:00:00.136) 0:12:18.893 ********** 2026-04-06 05:19:49.566345 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:19:49.566356 | orchestrator | 2026-04-06 05:19:49.566367 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-06 05:19:49.566378 | orchestrator | Monday 06 April 2026 05:19:49 +0000 (0:00:00.134) 0:12:19.028 ********** 2026-04-06 05:19:49.566389 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:19:49.566400 | orchestrator | 2026-04-06 05:19:49.566410 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-06 05:19:49.566421 | orchestrator | Monday 06 April 2026 
05:19:49 +0000 (0:00:00.141) 0:12:19.170 ********** 2026-04-06 05:19:49.566433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:19:49.566445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:19:49.566456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:19:49.566469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE 
[Natoma/Triton II]', 'holders': []}})  2026-04-06 05:19:49.566488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:19:49.566509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:19:49.811537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:19:49.811652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a86fd0c9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part16', 
'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part14', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part15', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part1', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-06 05:19:49.811674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:19:49.811686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:19:49.811698 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:19:49.811711 | orchestrator | 2026-04-06 05:19:49.811723 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-06 05:19:49.811734 | orchestrator | Monday 06 April 2026 05:19:49 +0000 (0:00:00.237) 0:12:19.407 ********** 2026-04-06 05:19:49.811774 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:19:49.811818 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:19:49.811831 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:19:49.811843 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:19:49.811855 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:19:49.811868 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:19:49.811884 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:19:49.811914 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a86fd0c9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part16', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part14', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part15', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part1', 'scsi-SQEMU_QEMU_HARDDISK_a86fd0c9-311f-45be-821d-b1ac3da783a1-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:20:00.492241 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:20:00.492340 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:20:00.492374 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:20:00.492384 | orchestrator | 2026-04-06 05:20:00.492393 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-06 05:20:00.492414 | 
Monday 06 April 2026 05:19:49 +0000 (0:00:00.230) 0:12:19.638 **********
ok: [testbed-node-2]

TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
Monday 06 April 2026 05:19:50 +0000 (0:00:00.526) 0:12:20.165 **********
ok: [testbed-node-2]

TASK [ceph-facts : Read osd pool default crush rule] ***************************
Monday 06 April 2026 05:19:50 +0000 (0:00:00.117) 0:12:20.282 **********
ok: [testbed-node-2]

TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
Monday 06 April 2026 05:19:51 +0000 (0:00:00.820) 0:12:21.102 **********
skipping: [testbed-node-2]

TASK [ceph-facts : Read osd pool default crush rule] ***************************
Monday 06 April 2026 05:19:51 +0000 (0:00:00.129) 0:12:21.232 **********
skipping: [testbed-node-2]

TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
Monday 06 April 2026 05:19:51 +0000 (0:00:00.254) 0:12:21.487 **********
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
Monday 06 April 2026 05:19:51 +0000 (0:00:00.166) 0:12:21.654 **********
ok: [testbed-node-2] => (item=testbed-node-0)
ok: [testbed-node-2] => (item=testbed-node-1)
ok: [testbed-node-2] => (item=testbed-node-2)

TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
Monday 06 April 2026 05:19:52 +0000 (0:00:00.695) 0:12:22.349 **********
skipping: [testbed-node-2] => (item=testbed-node-0)
skipping: [testbed-node-2] => (item=testbed-node-1)
skipping: [testbed-node-2] => (item=testbed-node-2)
skipping: [testbed-node-2]

TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
Monday 06 April 2026 05:19:52 +0000 (0:00:00.169) 0:12:22.519 **********
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
Monday 06 April 2026 05:19:52 +0000 (0:00:00.138) 0:12:22.657 **********
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-2] => (item=testbed-node-2)
ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)

TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
Monday 06 April 2026 05:19:54 +0000 (0:00:01.149) 0:12:23.807 **********
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-2] => (item=testbed-node-2)
ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Monday 06 April 2026 05:19:55 +0000 (0:00:01.720) 0:12:25.528 **********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Monday 06 April 2026 05:19:56 +0000 (0:00:00.229) 0:12:25.757 **********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Monday 06 April 2026 05:19:56 +0000 (0:00:00.512) 0:12:26.270 **********
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Monday 06 April 2026 05:19:57 +0000 (0:00:00.534) 0:12:26.805 **********
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Monday 06 April 2026 05:19:57 +0000 (0:00:00.138) 0:12:26.943 **********
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a rgw container] ********************************
Monday 06 April 2026 05:19:57 +0000 (0:00:00.152) 0:12:27.096 **********
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Monday 06 April 2026 05:19:57 +0000 (0:00:00.143) 0:12:27.239 **********
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Monday 06 April 2026 05:19:58 +0000 (0:00:00.582) 0:12:27.822 **********
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Monday 06 April 2026 05:19:58 +0000 (0:00:00.124) 0:12:27.946 **********
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Monday 06 April 2026 05:19:58 +0000 (0:00:00.153) 0:12:28.100 **********
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Monday 06 April 2026 05:19:58 +0000 (0:00:00.568) 0:12:28.669 **********
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Monday 06 April 2026 05:19:59 +0000 (0:00:00.542) 0:12:29.211 **********
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Monday 06 April 2026 05:19:59 +0000 (0:00:00.139) 0:12:29.350 **********
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Monday 06 April 2026 05:19:59 +0000 (0:00:00.175) 0:12:29.526 **********
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Monday 06 April 2026 05:19:59 +0000 (0:00:00.146) 0:12:29.673 **********
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Monday 06 April 2026 05:20:00 +0000 (0:00:00.482) 0:12:30.156 **********
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Monday 06 April 2026 05:20:00 +0000 (0:00:00.128) 0:12:30.284 **********
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Monday 06 April 2026 05:20:00 +0000 (0:00:00.129) 0:12:30.414 **********
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Monday 06 April 2026 05:20:00 +0000 (0:00:00.134) 0:12:30.548 **********
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Monday 06 April 2026 05:20:00 +0000 (0:00:00.157) 0:12:30.706 **********
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Monday 06 April 2026 05:20:01 +0000 (0:00:00.158) 0:12:30.864 **********
ok: [testbed-node-2]

TASK [ceph-common : Include configure_repository.yml] **************************
Monday 06 April 2026 05:20:01 +0000 (0:00:00.241) 0:12:31.106 **********
skipping: [testbed-node-2]

TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
Monday 06 April 2026 05:20:01 +0000 (0:00:00.130) 0:12:31.236 **********
skipping: [testbed-node-2]

TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
Monday 06 April 2026 05:20:01 +0000 (0:00:00.132) 0:12:31.369 **********
skipping: [testbed-node-2]

TASK [ceph-common : Include installs/install_on_debian.yml] ********************
Monday 06 April 2026 05:20:01 +0000 (0:00:00.130) 0:12:31.500 **********
skipping: [testbed-node-2]

TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
Monday 06 April 2026 05:20:01 +0000 (0:00:00.127) 0:12:31.628 **********
skipping: [testbed-node-2]

TASK [ceph-common : Get ceph version] ******************************************
Monday 06 April 2026 05:20:02 +0000 (0:00:00.130) 0:12:31.758 **********
skipping: [testbed-node-2]

TASK [ceph-common : Set_fact ceph_version] *************************************
Monday 06 April 2026 05:20:02 +0000 (0:00:00.458) 0:12:32.217 **********
skipping: [testbed-node-2]

TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
Monday 06 April 2026 05:20:02 +0000 (0:00:00.143) 0:12:32.361 **********
skipping: [testbed-node-2]

TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
Monday 06 April 2026 05:20:02 +0000 (0:00:00.135) 0:12:32.497 **********
skipping: [testbed-node-2]

TASK [ceph-common : Include configure_cluster_name.yml] ************************
Monday 06 April 2026 05:20:02 +0000 (0:00:00.137) 0:12:32.634 **********
skipping: [testbed-node-2]

TASK [ceph-common : Include configure_memory_allocator.yml] ********************
Monday 06 April 2026 05:20:03 +0000 (0:00:00.171) 0:12:32.806 **********
skipping: [testbed-node-2]

TASK [ceph-common : Include selinux.yml] ***************************************
Monday 06 April 2026 05:20:03 +0000 (0:00:00.122) 0:12:32.928 **********
skipping: [testbed-node-2]

TASK [ceph-container-common : Generate systemd ceph target file] ***************
Monday 06 April 2026 05:20:03 +0000 (0:00:00.208) 0:12:33.137 **********
ok: [testbed-node-2]

TASK [ceph-container-common : Enable ceph.target] ******************************
Monday 06 April 2026 05:20:04 +0000 (0:00:00.912) 0:12:34.050 **********
ok: [testbed-node-2]

TASK [ceph-container-common : Include prerequisites.yml] ***********************
Monday 06 April 2026 05:20:05 +0000 (0:00:01.393) 0:12:35.443 **********
included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2

TASK [ceph-container-common : Stop lvmetad] ************************************
Monday 06 April 2026 05:20:05 +0000 (0:00:00.189) 0:12:35.633 **********
skipping: [testbed-node-2]

TASK [ceph-container-common : Disable and mask lvmetad service] ****************
Monday 06 April 2026 05:20:06 +0000 (0:00:00.122) 0:12:35.755 **********
skipping: [testbed-node-2]

TASK [ceph-container-common : Remove ceph udev rules] **************************
Monday 06 April 2026 05:20:06 +0000 (0:00:00.134) 0:12:35.890 **********
ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)

TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
Monday 06 April 2026 05:20:07 +0000 (0:00:01.047) 0:12:36.937 **********
ok: [testbed-node-2]

TASK [ceph-container-common : Restore certificates selinux context] ************
Monday 06 April 2026 05:20:07 +0000 (0:00:00.456) 0:12:37.393 **********
skipping: [testbed-node-2]

TASK [ceph-container-common : Install python3 on osd nodes] ********************
Monday 06 April 2026 05:20:07 +0000 (0:00:00.142) 0:12:37.536 **********
skipping: [testbed-node-2]

TASK [ceph-container-common : Include registry.yml] ****************************
Monday 06 April 2026 05:20:07 +0000 (0:00:00.127) 0:12:37.664 **********
skipping: [testbed-node-2]

TASK [ceph-container-common : Include fetch_image.yml] *************************
Monday 06 April 2026 05:20:08 +0000 (0:00:00.103) 0:12:37.767 **********
included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2

TASK [ceph-container-common : Pulling Ceph container image] ********************
Monday 06 April 2026 05:20:08 +0000 (0:00:00.191) 0:12:37.959 **********
ok: [testbed-node-2]

TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
Monday 06 April 2026 05:20:08 +0000 (0:00:00.732) 0:12:38.691 **********
skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-2]

TASK [ceph-container-common : Pulling node-exporter container image] ***********
Monday 06 April 2026 05:20:09 +0000 (0:00:00.130) 0:12:38.821 **********
skipping: [testbed-node-2]

TASK [ceph-container-common : Export local ceph dev image] *********************
Monday 06 April 2026 05:20:09 +0000 (0:00:00.098) 0:12:38.920 **********
skipping: [testbed-node-2]

TASK [ceph-container-common : Copy ceph dev image file] ************************
Monday 06 April 2026 05:20:09 +0000 (0:00:00.154) 0:12:39.075 **********
skipping: [testbed-node-2]

TASK [ceph-container-common : Load ceph dev image] *****************************
Monday 06 April 2026 05:20:09 +0000 (0:00:00.131) 0:12:39.207 **********
skipping: [testbed-node-2]

TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
Monday 06 April 2026 05:20:09 +0000 (0:00:00.114) 0:12:39.321 **********
skipping: [testbed-node-2]

TASK [ceph-container-common : Get ceph version] ********************************
Monday 06 April 2026 05:20:09 +0000 (0:00:00.340) 0:12:39.661 **********
ok: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
Monday 06 April 2026 05:20:11 +0000 (0:00:01.488) 0:12:41.149 **********
ok: [testbed-node-2]

TASK [ceph-container-common : Include release.yml] *****************************
Monday 06 April 2026 05:20:11 +0000 (0:00:00.150) 0:12:41.300 **********
included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2

TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
Monday 06 April 2026 05:20:11 +0000 (0:00:00.222) 0:12:41.523 **********
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
Monday 06 April 2026 05:20:11 +0000 (0:00:00.156) 0:12:41.679 **********
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
Monday 06 April 2026 05:20:12 +0000 (0:00:00.156) 0:12:41.836 **********
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
Monday 06 April 2026 05:20:12 +0000 (0:00:00.165) 0:12:42.001 **********
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
Monday 06 April 2026 05:20:12 +0000 (0:00:00.158) 0:12:42.160 **********
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
Monday 06 April 2026 05:20:12 +0000 (0:00:00.148) 0:12:42.309 **********
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
Monday 06 April 2026 05:20:12 +0000 (0:00:00.149) 0:12:42.458 **********
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
Monday 06 April 2026 05:20:12 +0000 (0:00:00.156) 0:12:42.614 **********
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release reef] **********************
Monday 06 April 2026 05:20:13 +0000 (0:00:00.177) 0:12:42.792 **********
ok: [testbed-node-2]

TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
Monday 06 April 2026 05:20:13 +0000 (0:00:00.525) 0:12:43.318 **********
included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2

TASK [ceph-config : Create ceph initial directories] ***************************
Monday 06 April 2026 05:20:13 +0000 (0:00:00.205) 0:12:43.523 **********
ok: [testbed-node-2] => (item=/etc/ceph)
ok: [testbed-node-2] => (item=/var/lib/ceph/)
ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
ok: [testbed-node-2] => (item=/var/run/ceph)
ok: [testbed-node-2] => (item=/var/log/ceph)

TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
Monday 06 April 2026 05:20:19 +0000 (0:00:05.618) 0:12:49.142 **********
skipping: [testbed-node-2]

TASK [ceph-config : Reset num_osds] ********************************************
Monday 06 April 2026 05:20:19 +0000 (0:00:00.125) 0:12:49.268 **********
skipping: [testbed-node-2]

TASK [ceph-config : Count number of osds for lvm scenario] *********************
Monday 06 April 2026 05:20:19 +0000 (0:00:00.137) 0:12:49.406 **********
skipping: [testbed-node-2]

TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
Monday 06 April 2026 05:20:19 +0000 (0:00:00.136) 0:12:49.542 **********
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact rejected_devices] *********************************
Monday 06 April 2026 05:20:19 +0000 (0:00:00.136) 0:12:49.679 **********
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact _devices] *****************************************
Monday 06 April 2026 05:20:20 +0000 (0:00:00.124) 0:12:49.803 **********
skipping: [testbed-node-2]

TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Monday 06 April 2026 05:20:20 +0000 (0:00:00.135) 0:12:49.939 **********
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
Monday 06 April 2026 05:20:20 +0000 (0:00:00.148) 0:12:50.087 **********
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
Monday 06 April 2026 05:20:20 +0000 (0:00:00.142) 0:12:50.230 **********
skipping: [testbed-node-2]

TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
Monday 06 April 2026 05:20:20 +0000 (0:00:00.435) 0:12:50.665 **********
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
Monday 06 April 2026 05:20:21 +0000 (0:00:00.134) 0:12:50.799 **********
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact _osd_memory_target] *******************************
Monday 06 April 2026 05:20:21 +0000 (0:00:00.129) 0:12:50.929 **********
skipping: [testbed-node-2]

TASK [ceph-config : Set osd_memory_target to cluster host config] **************
Monday 06 April 2026 05:20:21 +0000 (0:00:00.155) 0:12:51.084 **********
skipping: [testbed-node-2]

TASK [ceph-config : Render rgw configs] ****************************************
Monday 06 April 2026 05:20:21 +0000 (0:00:00.241) 0:12:51.326 **********
skipping: [testbed-node-2]

TASK [ceph-config : Set config to cluster] *************************************
Monday 06 April 2026 05:20:21 +0000 (0:00:00.150) 0:12:51.476 **********
skipping: [testbed-node-2]

TASK [ceph-config : Set rgw configs to file] ***********************************
Monday 06 April 2026 05:20:22 +0000 (0:00:00.244) 0:12:51.720 **********
skipping: [testbed-node-2]

TASK [ceph-config : Create ceph conf directory] ********************************
Monday 06 April 2026 05:20:22 +0000 (0:00:00.150) 0:12:51.871 **********
skipping: [testbed-node-2]

TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Monday 06 April 2026 05:20:22 +0000 (0:00:00.133) 0:12:52.005 **********
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
Monday 06 April 2026 05:20:22 +0000 (0:00:00.148) 0:12:52.153 **********
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
Monday 06 April 2026 05:20:22 +0000 (0:00:00.143) 0:12:52.297 **********
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
Monday 06 April 2026 05:20:22 +0000 (0:00:00.151) 0:12:52.448 **********
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _interface] ****************************************
Monday 06 April 2026 05:20:22 +0000 (0:00:00.132) 0:12:52.581 **********
skipping: [testbed-node-2] => (item=testbed-node-3)
skipping: [testbed-node-2] => (item=testbed-node-4)
skipping: [testbed-node-2] => (item=testbed-node-5)
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
Monday 06 April 2026 05:20:23 +0000 (0:00:00.825) 0:12:53.407 **********
skipping: [testbed-node-2] => (item=testbed-node-3)
05:20:24.290205 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-06 05:20:54.212237 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-06 05:20:54.212384 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:20:54.212410 | orchestrator | 2026-04-06 05:20:54.212429 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-06 05:20:54.212449 | orchestrator | Monday 06 April 2026 05:20:24 +0000 (0:00:01.082) 0:12:54.489 ********** 2026-04-06 05:20:54.212467 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-04-06 05:20:54.212485 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-04-06 05:20:54.212503 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-04-06 05:20:54.212556 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:20:54.212575 | orchestrator | 2026-04-06 05:20:54.212594 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-06 05:20:54.212612 | orchestrator | Monday 06 April 2026 05:20:25 +0000 (0:00:00.465) 0:12:54.954 ********** 2026-04-06 05:20:54.212629 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:20:54.212647 | orchestrator | 2026-04-06 05:20:54.212664 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-06 05:20:54.212681 | orchestrator | Monday 06 April 2026 05:20:25 +0000 (0:00:00.159) 0:12:55.114 ********** 2026-04-06 05:20:54.212700 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-04-06 05:20:54.212718 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:20:54.212736 | orchestrator | 2026-04-06 05:20:54.212756 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-06 05:20:54.212776 | orchestrator | Monday 06 April 2026 05:20:25 +0000 (0:00:00.331) 0:12:55.446 ********** 2026-04-06 
05:20:54.212795 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:20:54.212814 | orchestrator | 2026-04-06 05:20:54.212834 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-06 05:20:54.212873 | orchestrator | Monday 06 April 2026 05:20:26 +0000 (0:00:00.849) 0:12:56.295 ********** 2026-04-06 05:20:54.212893 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:20:54.212913 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:20:54.212932 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-06 05:20:54.212952 | orchestrator | 2026-04-06 05:20:54.212971 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-06 05:20:54.212991 | orchestrator | Monday 06 April 2026 05:20:27 +0000 (0:00:00.669) 0:12:56.965 ********** 2026-04-06 05:20:54.213008 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2 2026-04-06 05:20:54.213025 | orchestrator | 2026-04-06 05:20:54.213072 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-04-06 05:20:54.213092 | orchestrator | Monday 06 April 2026 05:20:27 +0000 (0:00:00.212) 0:12:57.178 ********** 2026-04-06 05:20:54.213109 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:20:54.213127 | orchestrator | 2026-04-06 05:20:54.213144 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-04-06 05:20:54.213163 | orchestrator | Monday 06 April 2026 05:20:27 +0000 (0:00:00.503) 0:12:57.681 ********** 2026-04-06 05:20:54.213180 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:20:54.213198 | orchestrator | 2026-04-06 05:20:54.213216 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-04-06 05:20:54.213233 | orchestrator | 
Monday 06 April 2026 05:20:28 +0000 (0:00:00.146) 0:12:57.827 ********** 2026-04-06 05:20:54.213251 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 05:20:54.213269 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 05:20:54.213286 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 05:20:54.213304 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}] 2026-04-06 05:20:54.213322 | orchestrator | 2026-04-06 05:20:54.213341 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-04-06 05:20:54.213359 | orchestrator | Monday 06 April 2026 05:20:34 +0000 (0:00:06.658) 0:13:04.486 ********** 2026-04-06 05:20:54.213377 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:20:54.213393 | orchestrator | 2026-04-06 05:20:54.213412 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-04-06 05:20:54.213430 | orchestrator | Monday 06 April 2026 05:20:35 +0000 (0:00:00.811) 0:13:05.298 ********** 2026-04-06 05:20:54.213448 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-06 05:20:54.213466 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-04-06 05:20:54.213503 | orchestrator | 2026-04-06 05:20:54.213521 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-04-06 05:20:54.213539 | orchestrator | Monday 06 April 2026 05:20:37 +0000 (0:00:02.165) 0:13:07.463 ********** 2026-04-06 05:20:54.213557 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-06 05:20:54.213575 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-04-06 05:20:54.213593 | orchestrator | 2026-04-06 05:20:54.213612 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-04-06 05:20:54.213631 | orchestrator | Monday 06 April 
2026 05:20:38 +0000 (0:00:01.063) 0:13:08.527 ********** 2026-04-06 05:20:54.213649 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:20:54.213668 | orchestrator | 2026-04-06 05:20:54.213686 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-04-06 05:20:54.213705 | orchestrator | Monday 06 April 2026 05:20:39 +0000 (0:00:00.510) 0:13:09.037 ********** 2026-04-06 05:20:54.213723 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:20:54.213741 | orchestrator | 2026-04-06 05:20:54.213760 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-06 05:20:54.213778 | orchestrator | Monday 06 April 2026 05:20:39 +0000 (0:00:00.134) 0:13:09.172 ********** 2026-04-06 05:20:54.213796 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:20:54.213814 | orchestrator | 2026-04-06 05:20:54.213833 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-06 05:20:54.213882 | orchestrator | Monday 06 April 2026 05:20:39 +0000 (0:00:00.131) 0:13:09.304 ********** 2026-04-06 05:20:54.213901 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2 2026-04-06 05:20:54.213918 | orchestrator | 2026-04-06 05:20:54.213936 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-04-06 05:20:54.213954 | orchestrator | Monday 06 April 2026 05:20:39 +0000 (0:00:00.197) 0:13:09.501 ********** 2026-04-06 05:20:54.213973 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:20:54.213990 | orchestrator | 2026-04-06 05:20:54.214008 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-04-06 05:20:54.214137 | orchestrator | Monday 06 April 2026 05:20:39 +0000 (0:00:00.165) 0:13:09.666 ********** 2026-04-06 05:20:54.214157 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:20:54.214177 | orchestrator | 
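The start_mgr.yml flow that follows renders a templated systemd unit and a `ceph-mgr.target`, then enables and starts them. A rough sketch of what such a containerized ceph-mgr unit looks like — this is illustrative only, not the exact ceph-ansible template; the image reference, paths, and runtime flags are assumptions:

```ini
# /etc/systemd/system/ceph-mgr@.service -- illustrative sketch
[Unit]
Description=Ceph manager daemon
After=network-online.target
PartOf=ceph-mgr.target

[Service]
# Run the mgr daemon in a container; image and mounts are assumptions.
ExecStart=/usr/bin/podman run --rm --name ceph-mgr-%i \
    -v /var/lib/ceph:/var/lib/ceph -v /etc/ceph:/etc/ceph \
    quay.io/ceph/daemon:latest mgr
Restart=always

[Install]
WantedBy=ceph-mgr.target
```

Grouping daemons under a per-type target is what lets the later "Enable ceph-mgr.target" / "Systemd start mgr" tasks manage all mgr instances on a host with one unit name.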
2026-04-06 05:20:54.214196 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-04-06 05:20:54.214213 | orchestrator | Monday 06 April 2026 05:20:40 +0000 (0:00:00.150) 0:13:09.816 ********** 2026-04-06 05:20:54.214231 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2 2026-04-06 05:20:54.214250 | orchestrator | 2026-04-06 05:20:54.214268 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-04-06 05:20:54.214288 | orchestrator | Monday 06 April 2026 05:20:40 +0000 (0:00:00.201) 0:13:10.018 ********** 2026-04-06 05:20:54.214307 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:20:54.214325 | orchestrator | 2026-04-06 05:20:54.214343 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-04-06 05:20:54.214362 | orchestrator | Monday 06 April 2026 05:20:41 +0000 (0:00:00.999) 0:13:11.018 ********** 2026-04-06 05:20:54.214380 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:20:54.214397 | orchestrator | 2026-04-06 05:20:54.214416 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-04-06 05:20:54.214447 | orchestrator | Monday 06 April 2026 05:20:42 +0000 (0:00:01.267) 0:13:12.286 ********** 2026-04-06 05:20:54.214465 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:20:54.214483 | orchestrator | 2026-04-06 05:20:54.214500 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-04-06 05:20:54.214518 | orchestrator | Monday 06 April 2026 05:20:44 +0000 (0:00:01.465) 0:13:13.751 ********** 2026-04-06 05:20:54.214535 | orchestrator | changed: [testbed-node-2] 2026-04-06 05:20:54.214553 | orchestrator | 2026-04-06 05:20:54.214571 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-06 05:20:54.214605 | orchestrator | Monday 06 April 2026 
05:20:46 +0000 (0:00:02.868) 0:13:16.620 ********** 2026-04-06 05:20:54.214624 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-04-06 05:20:54.214642 | orchestrator | 2026-04-06 05:20:54.214660 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-04-06 05:20:54.214678 | orchestrator | Monday 06 April 2026 05:20:47 +0000 (0:00:00.638) 0:13:17.258 ********** 2026-04-06 05:20:54.214697 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:20:54.214715 | orchestrator | 2026-04-06 05:20:54.214734 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-04-06 05:20:54.214751 | orchestrator | Monday 06 April 2026 05:20:48 +0000 (0:00:01.356) 0:13:18.614 ********** 2026-04-06 05:20:54.214769 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:20:54.214787 | orchestrator | 2026-04-06 05:20:54.214806 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-04-06 05:20:54.214824 | orchestrator | Monday 06 April 2026 05:20:50 +0000 (0:00:01.395) 0:13:20.010 ********** 2026-04-06 05:20:54.214841 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:20:54.214861 | orchestrator | 2026-04-06 05:20:54.214879 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-04-06 05:20:54.214898 | orchestrator | Monday 06 April 2026 05:20:50 +0000 (0:00:00.303) 0:13:20.313 ********** 2026-04-06 05:20:54.214916 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:20:54.214935 | orchestrator | 2026-04-06 05:20:54.214952 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-04-06 05:20:54.214970 | orchestrator | Monday 06 April 2026 05:20:50 +0000 (0:00:00.170) 0:13:20.483 ********** 2026-04-06 05:20:54.214988 | orchestrator | skipping: 
[testbed-node-2] => (item=dashboard)  2026-04-06 05:20:54.215006 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-04-06 05:20:54.215023 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:20:54.215125 | orchestrator | 2026-04-06 05:20:54.215148 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-04-06 05:20:54.215167 | orchestrator | Monday 06 April 2026 05:20:51 +0000 (0:00:00.361) 0:13:20.845 ********** 2026-04-06 05:20:54.215186 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-04-06 05:20:54.215204 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)  2026-04-06 05:20:54.215222 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-04-06 05:20:54.215241 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-04-06 05:20:54.215259 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:20:54.215276 | orchestrator | 2026-04-06 05:20:54.215295 | orchestrator | PLAY [Set osd flags] *********************************************************** 2026-04-06 05:20:54.215312 | orchestrator | 2026-04-06 05:20:54.215330 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-06 05:20:54.215348 | orchestrator | Monday 06 April 2026 05:20:52 +0000 (0:00:01.849) 0:13:22.695 ********** 2026-04-06 05:20:54.215366 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:20:54.215386 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:20:54.215405 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:20:54.215423 | orchestrator | 2026-04-06 05:20:54.215441 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-06 05:20:54.215459 | orchestrator | Monday 06 April 2026 05:20:53 +0000 (0:00:00.675) 0:13:23.370 ********** 2026-04-06 05:20:54.215476 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:20:54.215494 | orchestrator | ok: [testbed-node-4] 
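The mgr_modules.yml steps above query `ceph mgr module ls`, convert the stdout to a dict, and derive which modules to disable and which to add; in this run every desired module (balancer, dashboard, prometheus, status) is already active, so both loops skip. A minimal sketch of that reconciliation, assuming the JSON field names (`enabled_modules`, `always_on_modules`) current Ceph releases emit — treat this as an illustration, not ceph-ansible's implementation:

```python
import json

def mgr_module_changes(module_ls_json, desired):
    """Return (to_enable, to_disable) given 'ceph mgr module ls -f json' output.

    Always-on modules need no explicit enable; field names are assumptions.
    """
    state = json.loads(module_ls_json)
    enabled = set(state.get("enabled_modules", []))
    always_on = set(state.get("always_on_modules", []))
    to_enable = set(desired) - enabled - always_on
    to_disable = enabled - set(desired)
    return to_enable, to_disable

# Sample mirroring this run: everything desired is already on, so no changes.
sample = json.dumps({
    "always_on_modules": ["balancer", "status"],
    "enabled_modules": ["dashboard", "prometheus"],
    "disabled_modules": [{"name": "telemetry"}],
})
on, off = mgr_module_changes(sample, {"balancer", "dashboard", "prometheus", "status"})
assert on == set() and off == set()
```

With a non-empty `to_enable`/`to_disable`, the corresponding tasks would run `ceph mgr module enable`/`disable` per item instead of skipping.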
2026-04-06 05:20:54.215512 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:20:54.215529 | orchestrator | 2026-04-06 05:20:54.215568 | orchestrator | TASK [Get pool list] *********************************************************** 2026-04-06 05:20:58.980001 | orchestrator | Monday 06 April 2026 05:20:54 +0000 (0:00:00.549) 0:13:23.919 ********** 2026-04-06 05:20:58.980218 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:20:58.980243 | orchestrator | 2026-04-06 05:20:58.980299 | orchestrator | TASK [Get balancer module status] ********************************************** 2026-04-06 05:20:58.980319 | orchestrator | Monday 06 April 2026 05:20:56 +0000 (0:00:02.018) 0:13:25.937 ********** 2026-04-06 05:20:58.980337 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:20:58.980356 | orchestrator | 2026-04-06 05:20:58.980374 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] **************************************** 2026-04-06 05:20:58.980392 | orchestrator | Monday 06 April 2026 05:20:58 +0000 (0:00:01.947) 0:13:27.884 ********** 2026-04-06 05:20:58.980439 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-04-06T03:00:54.147002+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '21', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 
'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-06 05:20:58.980498 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-04-06T03:02:08.808065+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 
'last_change': '33', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '31', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-06 05:20:58.980546 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-04-06T03:02:12.515094+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 16, 'pg_placement_num': 16, 'pg_placement_num_target': 16, 'pg_num_target': 16, 'pg_num_pending': 16, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': 
"0'0", 'target_version': "0'0"}, 'last_change': '77', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '31', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-06 05:20:58.980579 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-04-06T03:03:15.277173+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': 
{'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '75', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '69', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-06 05:20:59.400648 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-04-06T03:03:20.977109+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 
'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '75', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '69', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-06 05:20:59.400738 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-04-06T03:03:27.249769+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 
'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '75', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '71', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-06 05:20:59.400789 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-04-06T03:03:33.625427+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 
'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '170', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '71', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-06 05:20:59.400800 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': '2026-04-06T03:03:39.858476+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 
'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '75', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '73', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-06 05:20:59.400824 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 9, 'pool_name': '.rgw.root', 'create_time': '2026-04-06T03:03:52.294245+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 
'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '125', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '117', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-06 05:20:59.728845 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 10, 'pool_name': 'backups', 'create_time': '2026-04-06T03:04:41.715202+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 
'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '108', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 108, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 2.059999942779541, 'score_stable': 2.059999942779541, 'optimal_score': 1, 'raw_score_acting': 2.059999942779541, 'raw_score_stable': 2.059999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-06 05:20:59.729007 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 11, 'pool_name': 'volumes', 'create_time': '2026-04-06T03:04:51.891346+0000', 'flags': 8193, 'flags_names': 
'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '116', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 116, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-06 05:20:59.729035 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 12, 'pool_name': 
'images', 'create_time': '2026-04-06T03:05:00.952807+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '182', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 182, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-06 05:20:59.729110 | orchestrator | ok: 
[testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 'metrics', 'create_time': '2026-04-06T03:05:09.836471+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '134', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 134, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 
'average_primary_affinity_weighted': 1}}) 2026-04-06 05:22:29.686750 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 'vms', 'create_time': '2026-04-06T03:05:18.720177+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '142', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 142, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 
1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-06 05:22:29.686864 | orchestrator | 2026-04-06 05:22:29.686876 | orchestrator | TASK [Disable balancer] ******************************************************** 2026-04-06 05:22:29.686884 | orchestrator | Monday 06 April 2026 05:21:00 +0000 (0:00:02.414) 0:13:30.298 ********** 2026-04-06 05:22:29.686892 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:22:29.686899 | orchestrator | 2026-04-06 05:22:29.686906 | orchestrator | TASK [Disable pg autoscale on pools] ******************************************* 2026-04-06 05:22:29.686912 | orchestrator | Monday 06 April 2026 05:21:02 +0000 (0:00:01.927) 0:13:32.226 ********** 2026-04-06 05:22:29.686919 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-04-06 05:22:29.686927 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-04-06 05:22:29.686934 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-04-06 05:22:29.686941 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-04-06 05:22:29.686949 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-04-06 05:22:29.686969 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-04-06 05:22:29.686975 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-04-06 05:22:29.686982 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => 
(item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-04-06 05:22:29.686989 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-04-06 05:22:29.686996 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-04-06 05:22:29.687002 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-04-06 05:22:29.687009 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-04-06 05:22:29.687016 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-04-06 05:22:29.687022 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-04-06 05:22:29.687029 | orchestrator | 2026-04-06 05:22:29.687036 | orchestrator | TASK [Set osd flags] *********************************************************** 2026-04-06 05:22:29.687054 | orchestrator | Monday 06 April 2026 05:22:18 +0000 (0:01:16.100) 0:14:48.327 ********** 2026-04-06 05:22:29.687061 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-04-06 05:22:29.687068 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-04-06 05:22:29.687074 | orchestrator | 2026-04-06 05:22:29.687081 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-04-06 05:22:29.687116 | orchestrator | 2026-04-06 05:22:29.687123 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-06 05:22:29.687130 | orchestrator | Monday 06 April 2026 05:22:23 +0000 (0:00:04.954) 0:14:53.281 ********** 2026-04-06 05:22:29.687137 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-04-06 05:22:29.687149 | orchestrator | 2026-04-06 05:22:29.687156 | orchestrator | TASK [ceph-facts : Check if it 
is atomic host] ********************************* 2026-04-06 05:22:29.687163 | orchestrator | Monday 06 April 2026 05:22:23 +0000 (0:00:00.233) 0:14:53.515 ********** 2026-04-06 05:22:29.687170 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:22:29.687177 | orchestrator | 2026-04-06 05:22:29.687183 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-06 05:22:29.687190 | orchestrator | Monday 06 April 2026 05:22:24 +0000 (0:00:00.509) 0:14:54.024 ********** 2026-04-06 05:22:29.687197 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:22:29.687203 | orchestrator | 2026-04-06 05:22:29.687210 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-06 05:22:29.687217 | orchestrator | Monday 06 April 2026 05:22:24 +0000 (0:00:00.133) 0:14:54.158 ********** 2026-04-06 05:22:29.687223 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:22:29.687230 | orchestrator | 2026-04-06 05:22:29.687237 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-06 05:22:29.687243 | orchestrator | Monday 06 April 2026 05:22:24 +0000 (0:00:00.448) 0:14:54.606 ********** 2026-04-06 05:22:29.687250 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:22:29.687256 | orchestrator | 2026-04-06 05:22:29.687263 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-06 05:22:29.687270 | orchestrator | Monday 06 April 2026 05:22:25 +0000 (0:00:00.148) 0:14:54.755 ********** 2026-04-06 05:22:29.687276 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:22:29.687283 | orchestrator | 2026-04-06 05:22:29.687290 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-06 05:22:29.687297 | orchestrator | Monday 06 April 2026 05:22:25 +0000 (0:00:00.138) 0:14:54.893 ********** 2026-04-06 05:22:29.687304 | orchestrator | ok: [testbed-node-3] 2026-04-06 
05:22:29.687311 | orchestrator | 2026-04-06 05:22:29.687319 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-06 05:22:29.687327 | orchestrator | Monday 06 April 2026 05:22:25 +0000 (0:00:00.473) 0:14:55.366 ********** 2026-04-06 05:22:29.687336 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:22:29.687344 | orchestrator | 2026-04-06 05:22:29.687352 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-06 05:22:29.687359 | orchestrator | Monday 06 April 2026 05:22:25 +0000 (0:00:00.142) 0:14:55.509 ********** 2026-04-06 05:22:29.687367 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:22:29.687375 | orchestrator | 2026-04-06 05:22:29.687383 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-06 05:22:29.687391 | orchestrator | Monday 06 April 2026 05:22:25 +0000 (0:00:00.161) 0:14:55.671 ********** 2026-04-06 05:22:29.687399 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:22:29.687407 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:22:29.687415 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:22:29.687422 | orchestrator | 2026-04-06 05:22:29.687433 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-06 05:22:29.687444 | orchestrator | Monday 06 April 2026 05:22:26 +0000 (0:00:00.700) 0:14:56.371 ********** 2026-04-06 05:22:29.687455 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:22:29.687464 | orchestrator | 2026-04-06 05:22:29.687472 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-06 05:22:29.687480 | orchestrator | Monday 06 April 2026 05:22:26 +0000 (0:00:00.289) 0:14:56.661 ********** 
2026-04-06 05:22:29.687487 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:22:29.687495 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:22:29.687507 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:22:29.687519 | orchestrator | 2026-04-06 05:22:29.687527 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-06 05:22:29.687535 | orchestrator | Monday 06 April 2026 05:22:28 +0000 (0:00:01.912) 0:14:58.574 ********** 2026-04-06 05:22:29.687542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-06 05:22:29.687550 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-06 05:22:29.687558 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-06 05:22:29.687566 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:22:29.687574 | orchestrator | 2026-04-06 05:22:29.687582 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-06 05:22:29.687590 | orchestrator | Monday 06 April 2026 05:22:29 +0000 (0:00:00.429) 0:14:59.003 ********** 2026-04-06 05:22:29.687599 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-06 05:22:29.687614 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-06 05:22:34.408963 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-06 05:22:34.409156 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:22:34.409190 | orchestrator | 2026-04-06 05:22:34.409211 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-06 05:22:34.409229 | orchestrator | Monday 06 April 2026 05:22:29 +0000 (0:00:00.632) 0:14:59.636 ********** 2026-04-06 05:22:34.409250 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:22:34.409270 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:22:34.409288 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:22:34.409298 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:22:34.409308 | orchestrator | 2026-04-06 
05:22:34.409318 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-06 05:22:34.409328 | orchestrator | Monday 06 April 2026 05:22:30 +0000 (0:00:00.177) 0:14:59.814 ********** 2026-04-06 05:22:34.409343 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '06ed7bf51830', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-06 05:22:27.520904', 'end': '2026-04-06 05:22:27.571974', 'delta': '0:00:00.051070', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['06ed7bf51830'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-06 05:22:34.409414 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6879ce368bbc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-06 05:22:28.105115', 'end': '2026-04-06 05:22:28.161556', 'delta': '0:00:00.056441', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6879ce368bbc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-06 05:22:34.409459 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a00606ebddc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-2'], 'start': '2026-04-06 05:22:28.659879', 'end': '2026-04-06 05:22:28.707871', 'delta': '0:00:00.047992', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a00606ebddc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-06 05:22:34.409477 | orchestrator | 2026-04-06 05:22:34.409494 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-06 05:22:34.409511 | orchestrator | Monday 06 April 2026 05:22:30 +0000 (0:00:00.256) 0:15:00.070 ********** 2026-04-06 05:22:34.409528 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:22:34.409545 | orchestrator | 2026-04-06 05:22:34.409562 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-06 05:22:34.409578 | orchestrator | Monday 06 April 2026 05:22:30 +0000 (0:00:00.255) 0:15:00.325 ********** 2026-04-06 05:22:34.409596 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:22:34.409613 | orchestrator | 2026-04-06 05:22:34.409630 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-06 05:22:34.409648 | orchestrator | Monday 06 April 2026 05:22:30 +0000 (0:00:00.245) 0:15:00.571 ********** 2026-04-06 05:22:34.409665 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:22:34.409682 | orchestrator | 2026-04-06 05:22:34.409701 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-06 05:22:34.409718 | orchestrator | Monday 06 April 2026 05:22:31 +0000 (0:00:00.151) 0:15:00.722 ********** 2026-04-06 05:22:34.409736 | orchestrator | ok: 
[testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:22:34.409752 | orchestrator | 2026-04-06 05:22:34.409768 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-06 05:22:34.409785 | orchestrator | Monday 06 April 2026 05:22:32 +0000 (0:00:01.410) 0:15:02.133 ********** 2026-04-06 05:22:34.409802 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:22:34.409819 | orchestrator | 2026-04-06 05:22:34.409836 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-06 05:22:34.409853 | orchestrator | Monday 06 April 2026 05:22:32 +0000 (0:00:00.448) 0:15:02.581 ********** 2026-04-06 05:22:34.409870 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:22:34.409887 | orchestrator | 2026-04-06 05:22:34.409905 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-06 05:22:34.409923 | orchestrator | Monday 06 April 2026 05:22:32 +0000 (0:00:00.124) 0:15:02.706 ********** 2026-04-06 05:22:34.409964 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:22:34.409983 | orchestrator | 2026-04-06 05:22:34.410000 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-06 05:22:34.410080 | orchestrator | Monday 06 April 2026 05:22:33 +0000 (0:00:00.230) 0:15:02.937 ********** 2026-04-06 05:22:34.410171 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:22:34.410189 | orchestrator | 2026-04-06 05:22:34.410206 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-06 05:22:34.410223 | orchestrator | Monday 06 April 2026 05:22:33 +0000 (0:00:00.124) 0:15:03.062 ********** 2026-04-06 05:22:34.410239 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:22:34.410255 | orchestrator | 2026-04-06 05:22:34.410282 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 
2026-04-06 05:22:34.410301 | orchestrator | Monday 06 April 2026 05:22:33 +0000 (0:00:00.136) 0:15:03.198 ********** 2026-04-06 05:22:34.410318 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:22:34.410334 | orchestrator | 2026-04-06 05:22:34.410350 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-06 05:22:34.410365 | orchestrator | Monday 06 April 2026 05:22:33 +0000 (0:00:00.193) 0:15:03.392 ********** 2026-04-06 05:22:34.410381 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:22:34.410395 | orchestrator | 2026-04-06 05:22:34.410409 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-06 05:22:34.410425 | orchestrator | Monday 06 April 2026 05:22:33 +0000 (0:00:00.179) 0:15:03.572 ********** 2026-04-06 05:22:34.410440 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:22:34.410456 | orchestrator | 2026-04-06 05:22:34.410472 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-06 05:22:34.410488 | orchestrator | Monday 06 April 2026 05:22:34 +0000 (0:00:00.170) 0:15:03.743 ********** 2026-04-06 05:22:34.410503 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:22:34.410520 | orchestrator | 2026-04-06 05:22:34.410537 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-06 05:22:34.410554 | orchestrator | Monday 06 April 2026 05:22:34 +0000 (0:00:00.127) 0:15:03.871 ********** 2026-04-06 05:22:34.410570 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:22:34.410586 | orchestrator | 2026-04-06 05:22:34.410612 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-06 05:22:34.410629 | orchestrator | Monday 06 April 2026 05:22:34 +0000 (0:00:00.167) 0:15:04.039 ********** 2026-04-06 05:22:34.410647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:22:34.410684 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--33ff4195--b9ae--565c--9501--f62265c8cf2c-osd--block--33ff4195--b9ae--565c--9501--f62265c8cf2c', 'dm-uuid-LVM-bPoYmFvg2GavrOdhBiQRDEx8f4M6ftpRd0WF3SgLoZI9250ovpvj600rDtqy23dS'], 'uuids': ['568ee26d-bc52-45e1-a610-bd1b65a33bb1'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '8498d812', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['d0WF3S-gLoZ-I925-0ovp-vj60-0rDt-qy23dS']}})  2026-04-06 05:22:34.528222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103', 'scsi-SQEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '71f71275', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-06 05:22:34.528317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-KIe40k-k1Qf-BSLn-gKBM-IKSP-hovG-JLrIYd', 'scsi-0QEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527', 'scsi-SQEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5872ea60', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--44d7a625--0d29--5597--9a0c--b91ce06f2e33-osd--block--44d7a625--0d29--5597--9a0c--b91ce06f2e33']}})  2026-04-06 05:22:34.528329 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:22:34.528339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:22:34.528359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-44-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-06 05:22:34.528367 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:22:34.528374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-leWHVn-oic4-cgBg-jJKw-f9UM-EMV2-wXFYs3', 'dm-uuid-CRYPT-LUKS2-9b11f78520334917a26820c7a917e496-leWHVn-oic4-cgBg-jJKw-f9UM-EMV2-wXFYs3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-06 05:22:34.528394 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:22:34.528407 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--44d7a625--0d29--5597--9a0c--b91ce06f2e33-osd--block--44d7a625--0d29--5597--9a0c--b91ce06f2e33', 'dm-uuid-LVM-9nFw926dfpKXupvgijedzJHToRNmcQ5JleWHVnoic4cgBgjJKwf9UMEMV2wXFYs3'], 'uuids': ['9b11f785-2033-4917-a268-20c7a917e496'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5872ea60', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['leWHVn-oic4-cgBg-jJKw-f9UM-EMV2-wXFYs3']}})  2026-04-06 05:22:34.528415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-oc9r6Q-FBfB-APQ9-Ef3d-Gduy-n2RE-MAdmSJ', 'scsi-0QEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c', 'scsi-SQEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8498d812', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--33ff4195--b9ae--565c--9501--f62265c8cf2c-osd--block--33ff4195--b9ae--565c--9501--f62265c8cf2c']}})  2026-04-06 05:22:34.528422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:22:34.528442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9d494db8', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part16', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part15', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-06 05:22:34.842667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:22:34.842767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:22:34.842783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-d0WF3S-gLoZ-I925-0ovp-vj60-0rDt-qy23dS', 'dm-uuid-CRYPT-LUKS2-568ee26dbc5245e1a610bd1b65a33bb1-d0WF3S-gLoZ-I925-0ovp-vj60-0rDt-qy23dS'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-06 05:22:34.842798 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:22:34.842812 | orchestrator | 2026-04-06 05:22:34.842824 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-06 05:22:34.842836 | orchestrator | Monday 06 April 2026 05:22:34 +0000 (0:00:00.338) 0:15:04.377 ********** 2026-04-06 05:22:34.842849 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:22:34.842880 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--33ff4195--b9ae--565c--9501--f62265c8cf2c-osd--block--33ff4195--b9ae--565c--9501--f62265c8cf2c', 'dm-uuid-LVM-bPoYmFvg2GavrOdhBiQRDEx8f4M6ftpRd0WF3SgLoZI9250ovpvj600rDtqy23dS'], 'uuids': ['568ee26d-bc52-45e1-a610-bd1b65a33bb1'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '8498d812', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['d0WF3S-gLoZ-I925-0ovp-vj60-0rDt-qy23dS']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:22:34.842893 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103', 'scsi-SQEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '71f71275', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:22:34.842942 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-KIe40k-k1Qf-BSLn-gKBM-IKSP-hovG-JLrIYd', 'scsi-0QEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527', 'scsi-SQEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5872ea60', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--44d7a625--0d29--5597--9a0c--b91ce06f2e33-osd--block--44d7a625--0d29--5597--9a0c--b91ce06f2e33']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:22:34.842958 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:22:34.842970 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:22:34.842988 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:22:34.843000 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:22:34.843018 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-leWHVn-oic4-cgBg-jJKw-f9UM-EMV2-wXFYs3', 'dm-uuid-CRYPT-LUKS2-9b11f78520334917a26820c7a917e496-leWHVn-oic4-cgBg-jJKw-f9UM-EMV2-wXFYs3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:22:36.532541 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:22:36.532651 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--44d7a625--0d29--5597--9a0c--b91ce06f2e33-osd--block--44d7a625--0d29--5597--9a0c--b91ce06f2e33', 'dm-uuid-LVM-9nFw926dfpKXupvgijedzJHToRNmcQ5JleWHVnoic4cgBgjJKwf9UMEMV2wXFYs3'], 'uuids': ['9b11f785-2033-4917-a268-20c7a917e496'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5872ea60', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['leWHVn-oic4-cgBg-jJKw-f9UM-EMV2-wXFYs3']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:22:36.532669 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-oc9r6Q-FBfB-APQ9-Ef3d-Gduy-n2RE-MAdmSJ', 'scsi-0QEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c', 'scsi-SQEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8498d812', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--33ff4195--b9ae--565c--9501--f62265c8cf2c-osd--block--33ff4195--b9ae--565c--9501--f62265c8cf2c']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:22:36.532701 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:22:36.532736 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9d494db8', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part16', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part15', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:22:36.532773 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:22:36.532786 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:22:36.532804 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-d0WF3S-gLoZ-I925-0ovp-vj60-0rDt-qy23dS', 'dm-uuid-CRYPT-LUKS2-568ee26dbc5245e1a610bd1b65a33bb1-d0WF3S-gLoZ-I925-0ovp-vj60-0rDt-qy23dS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:22:36.532825 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:22:36.532839 | orchestrator | 2026-04-06 05:22:36.532851 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-06 05:22:36.532864 | orchestrator | Monday 06 April 2026 05:22:35 +0000 (0:00:00.386) 0:15:04.764 ********** 2026-04-06 05:22:36.532875 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:22:36.532887 | orchestrator | 2026-04-06 05:22:36.532899 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-06 05:22:36.532910 | orchestrator | Monday 06 April 2026 05:22:35 +0000 (0:00:00.498) 0:15:05.262 ********** 2026-04-06 05:22:36.532921 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:22:36.532932 | orchestrator | 2026-04-06 05:22:36.532943 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 05:22:36.532954 | orchestrator | Monday 06 April 2026 05:22:35 +0000 (0:00:00.444) 0:15:05.707 ********** 2026-04-06 05:22:36.532965 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:22:36.532976 | orchestrator | 2026-04-06 05:22:36.532987 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 05:22:36.533006 | orchestrator | Monday 06 April 2026 05:22:36 +0000 (0:00:00.540) 0:15:06.247 ********** 2026-04-06 05:22:50.929092 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:22:50.929245 | orchestrator | 2026-04-06 05:22:50.929262 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 05:22:50.929275 | orchestrator | Monday 06 April 2026 05:22:36 +0000 (0:00:00.130) 0:15:06.377 ********** 2026-04-06 05:22:50.929286 | orchestrator | skipping: [testbed-node-3] 2026-04-06 
05:22:50.929298 | orchestrator | 2026-04-06 05:22:50.929309 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 05:22:50.929320 | orchestrator | Monday 06 April 2026 05:22:36 +0000 (0:00:00.252) 0:15:06.630 ********** 2026-04-06 05:22:50.929331 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:22:50.929342 | orchestrator | 2026-04-06 05:22:50.929353 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-06 05:22:50.929364 | orchestrator | Monday 06 April 2026 05:22:37 +0000 (0:00:00.224) 0:15:06.855 ********** 2026-04-06 05:22:50.929375 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-06 05:22:50.929386 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-06 05:22:50.929397 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-06 05:22:50.929408 | orchestrator | 2026-04-06 05:22:50.929419 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-06 05:22:50.929430 | orchestrator | Monday 06 April 2026 05:22:37 +0000 (0:00:00.729) 0:15:07.584 ********** 2026-04-06 05:22:50.929441 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-06 05:22:50.929452 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-06 05:22:50.929463 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-06 05:22:50.929473 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:22:50.929484 | orchestrator | 2026-04-06 05:22:50.929495 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-06 05:22:50.929506 | orchestrator | Monday 06 April 2026 05:22:38 +0000 (0:00:00.195) 0:15:07.779 ********** 2026-04-06 05:22:50.929517 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-04-06 05:22:50.929529 | 
orchestrator |
2026-04-06 05:22:50.929547 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-06 05:22:50.929567 | orchestrator | Monday 06 April 2026 05:22:38 +0000 (0:00:00.292) 0:15:08.071 **********
2026-04-06 05:22:50.929578 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:22:50.929589 | orchestrator |
2026-04-06 05:22:50.929600 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-06 05:22:50.929640 | orchestrator | Monday 06 April 2026 05:22:38 +0000 (0:00:00.189) 0:15:08.261 **********
2026-04-06 05:22:50.929653 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:22:50.929666 | orchestrator |
2026-04-06 05:22:50.929679 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-06 05:22:50.929691 | orchestrator | Monday 06 April 2026 05:22:38 +0000 (0:00:00.152) 0:15:08.413 **********
2026-04-06 05:22:50.929704 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:22:50.929716 | orchestrator |
2026-04-06 05:22:50.929728 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-06 05:22:50.929741 | orchestrator | Monday 06 April 2026 05:22:38 +0000 (0:00:00.290) 0:15:08.563 **********
2026-04-06 05:22:50.929754 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:22:50.929766 | orchestrator |
2026-04-06 05:22:50.929779 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-06 05:22:50.929791 | orchestrator | Monday 06 April 2026 05:22:39 +0000 (0:00:00.290) 0:15:08.854 **********
2026-04-06 05:22:50.929804 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-06 05:22:50.929830 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-06 05:22:50.929843 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-06 05:22:50.929856 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:22:50.929869 | orchestrator |
2026-04-06 05:22:50.929883 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-06 05:22:50.929895 | orchestrator | Monday 06 April 2026 05:22:40 +0000 (0:00:01.089) 0:15:09.943 **********
2026-04-06 05:22:50.929908 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-06 05:22:50.929921 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-06 05:22:50.929934 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-06 05:22:50.929947 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:22:50.929960 | orchestrator |
2026-04-06 05:22:50.929973 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-06 05:22:50.929985 | orchestrator | Monday 06 April 2026 05:22:40 +0000 (0:00:00.419) 0:15:10.363 **********
2026-04-06 05:22:50.929996 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-06 05:22:50.930006 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-06 05:22:50.930071 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-06 05:22:50.930083 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:22:50.930094 | orchestrator |
2026-04-06 05:22:50.930124 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-06 05:22:50.930135 | orchestrator | Monday 06 April 2026 05:22:41 +0000 (0:00:00.394) 0:15:10.758 **********
2026-04-06 05:22:50.930146 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:22:50.930156 | orchestrator |
2026-04-06 05:22:50.930167 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-06 05:22:50.930178 | orchestrator | Monday 06 April 2026 05:22:41 +0000 (0:00:00.165) 0:15:10.923 **********
2026-04-06 05:22:50.930188 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-06 05:22:50.930199 | orchestrator |
2026-04-06 05:22:50.930209 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-06 05:22:50.930220 | orchestrator | Monday 06 April 2026 05:22:41 +0000 (0:00:00.341) 0:15:11.264 **********
2026-04-06 05:22:50.930248 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-06 05:22:50.930260 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 05:22:50.930271 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 05:22:50.930282 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-06 05:22:50.930292 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-06 05:22:50.930303 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-06 05:22:50.930324 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-06 05:22:50.930335 | orchestrator |
2026-04-06 05:22:50.930346 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-06 05:22:50.930357 | orchestrator | Monday 06 April 2026 05:22:42 +0000 (0:00:00.856) 0:15:12.120 **********
2026-04-06 05:22:50.930368 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-06 05:22:50.930379 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 05:22:50.930389 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 05:22:50.930400 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-06 05:22:50.930411 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-06 05:22:50.930422 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-06 05:22:50.930433 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-06 05:22:50.930443 | orchestrator |
2026-04-06 05:22:50.930454 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-04-06 05:22:50.930465 | orchestrator | Monday 06 April 2026 05:22:44 +0000 (0:00:01.687) 0:15:13.808 **********
2026-04-06 05:22:50.930475 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:22:50.930486 | orchestrator |
2026-04-06 05:22:50.930497 | orchestrator | TASK [Set num_osds] ************************************************************
2026-04-06 05:22:50.930508 | orchestrator | Monday 06 April 2026 05:22:44 +0000 (0:00:00.484) 0:15:14.293 **********
2026-04-06 05:22:50.930518 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:22:50.930529 | orchestrator |
2026-04-06 05:22:50.930540 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-04-06 05:22:50.930551 | orchestrator | Monday 06 April 2026 05:22:44 +0000 (0:00:00.136) 0:15:14.429 **********
2026-04-06 05:22:50.930561 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:22:50.930572 | orchestrator |
2026-04-06 05:22:50.930583 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-04-06 05:22:50.930594 | orchestrator | Monday 06 April 2026 05:22:44 +0000 (0:00:00.238) 0:15:14.668 **********
2026-04-06 05:22:50.930604 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-04-06 05:22:50.930615 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-04-06 05:22:50.930626 | orchestrator |
2026-04-06 05:22:50.930637 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-06 05:22:50.930647 | orchestrator | Monday 06 April 2026 05:22:48 +0000 (0:00:03.091) 0:15:17.759 **********
2026-04-06 05:22:50.930658 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-04-06 05:22:50.930669 | orchestrator |
2026-04-06 05:22:50.930680 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-06 05:22:50.930696 | orchestrator | Monday 06 April 2026 05:22:48 +0000 (0:00:00.487) 0:15:18.246 **********
2026-04-06 05:22:50.930707 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-04-06 05:22:50.930718 | orchestrator |
2026-04-06 05:22:50.930729 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-06 05:22:50.930739 | orchestrator | Monday 06 April 2026 05:22:48 +0000 (0:00:00.219) 0:15:18.465 **********
2026-04-06 05:22:50.930750 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:22:50.930761 | orchestrator |
2026-04-06 05:22:50.930772 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-06 05:22:50.930782 | orchestrator | Monday 06 April 2026 05:22:48 +0000 (0:00:00.133) 0:15:18.599 **********
2026-04-06 05:22:50.930793 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:22:50.930804 | orchestrator |
2026-04-06 05:22:50.930814 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-06 05:22:50.930832 | orchestrator | Monday 06 April 2026 05:22:49 +0000 (0:00:00.523) 0:15:19.122 **********
2026-04-06 05:22:50.930843 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:22:50.930853 | orchestrator |
2026-04-06 05:22:50.930864 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-06 05:22:50.930875 | orchestrator | Monday 06 April 2026 05:22:49 +0000 (0:00:00.540) 0:15:19.663 **********
2026-04-06 05:22:50.930886 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:22:50.930897 | orchestrator |
2026-04-06 05:22:50.930907 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-06 05:22:50.930918 | orchestrator | Monday 06 April 2026 05:22:50 +0000 (0:00:00.561) 0:15:20.224 **********
2026-04-06 05:22:50.930929 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:22:50.930940 | orchestrator |
2026-04-06 05:22:50.930956 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-06 05:22:50.930974 | orchestrator | Monday 06 April 2026 05:22:50 +0000 (0:00:00.145) 0:15:20.370 **********
2026-04-06 05:22:50.930992 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:22:50.931008 | orchestrator |
2026-04-06 05:22:50.931027 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-06 05:22:50.931045 | orchestrator | Monday 06 April 2026 05:22:50 +0000 (0:00:00.139) 0:15:20.510 **********
2026-04-06 05:22:50.931063 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:22:50.931083 | orchestrator |
2026-04-06 05:22:50.931138 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-06 05:23:01.869559 | orchestrator | Monday 06 April 2026 05:22:50 +0000 (0:00:00.129) 0:15:20.639 **********
2026-04-06 05:23:01.869673 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:23:01.869689 | orchestrator |
2026-04-06 05:23:01.869702 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-06 05:23:01.869713 | orchestrator | Monday 06 April 2026 05:22:51 +0000 (0:00:00.573) 0:15:21.213 **********
2026-04-06 05:23:01.869725 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:23:01.869736 | orchestrator |
2026-04-06 05:23:01.869747 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-06 05:23:01.869758 | orchestrator | Monday 06 April 2026 05:22:52 +0000 (0:00:00.529) 0:15:21.742 **********
2026-04-06 05:23:01.869770 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.869782 | orchestrator |
2026-04-06 05:23:01.869793 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-06 05:23:01.869804 | orchestrator | Monday 06 April 2026 05:22:52 +0000 (0:00:00.132) 0:15:21.875 **********
2026-04-06 05:23:01.869815 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.869826 | orchestrator |
2026-04-06 05:23:01.869837 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-06 05:23:01.869848 | orchestrator | Monday 06 April 2026 05:22:52 +0000 (0:00:00.425) 0:15:22.300 **********
2026-04-06 05:23:01.869859 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:23:01.869870 | orchestrator |
2026-04-06 05:23:01.869881 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-06 05:23:01.869892 | orchestrator | Monday 06 April 2026 05:22:52 +0000 (0:00:00.154) 0:15:22.454 **********
2026-04-06 05:23:01.869903 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:23:01.869916 | orchestrator |
2026-04-06 05:23:01.869927 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-06 05:23:01.869938 | orchestrator | Monday 06 April 2026 05:22:52 +0000 (0:00:00.147) 0:15:22.602 **********
2026-04-06 05:23:01.869949 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:23:01.869960 | orchestrator |
2026-04-06 05:23:01.869971 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-06 05:23:01.869982 | orchestrator | Monday 06 April 2026 05:22:53 +0000 (0:00:00.166) 0:15:22.768 **********
2026-04-06 05:23:01.869993 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.870004 | orchestrator |
2026-04-06 05:23:01.870015 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-06 05:23:01.870166 | orchestrator | Monday 06 April 2026 05:22:53 +0000 (0:00:00.135) 0:15:22.904 **********
2026-04-06 05:23:01.870178 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.870189 | orchestrator |
2026-04-06 05:23:01.870200 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-06 05:23:01.870211 | orchestrator | Monday 06 April 2026 05:22:53 +0000 (0:00:00.140) 0:15:23.045 **********
2026-04-06 05:23:01.870222 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.870233 | orchestrator |
2026-04-06 05:23:01.870244 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-06 05:23:01.870254 | orchestrator | Monday 06 April 2026 05:22:53 +0000 (0:00:00.128) 0:15:23.173 **********
2026-04-06 05:23:01.870265 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:23:01.870276 | orchestrator |
2026-04-06 05:23:01.870287 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-06 05:23:01.870298 | orchestrator | Monday 06 April 2026 05:22:53 +0000 (0:00:00.164) 0:15:23.337 **********
2026-04-06 05:23:01.870308 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:23:01.870319 | orchestrator |
2026-04-06 05:23:01.870330 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-06 05:23:01.870341 | orchestrator | Monday 06 April 2026 05:22:53 +0000 (0:00:00.223) 0:15:23.561 **********
2026-04-06 05:23:01.870352 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.870363 | orchestrator |
2026-04-06 05:23:01.870382 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-06 05:23:01.870394 | orchestrator | Monday 06 April 2026 05:22:53 +0000 (0:00:00.128) 0:15:23.689 **********
2026-04-06 05:23:01.870404 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.870415 | orchestrator |
2026-04-06 05:23:01.870426 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-06 05:23:01.870437 | orchestrator | Monday 06 April 2026 05:22:54 +0000 (0:00:00.126) 0:15:23.816 **********
2026-04-06 05:23:01.870447 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.870458 | orchestrator |
2026-04-06 05:23:01.870469 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-06 05:23:01.870480 | orchestrator | Monday 06 April 2026 05:22:54 +0000 (0:00:00.126) 0:15:23.943 **********
2026-04-06 05:23:01.870490 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.870501 | orchestrator |
2026-04-06 05:23:01.870512 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-06 05:23:01.870523 | orchestrator | Monday 06 April 2026 05:22:54 +0000 (0:00:00.457) 0:15:24.400 **********
2026-04-06 05:23:01.870534 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.870545 | orchestrator |
2026-04-06 05:23:01.870555 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-06 05:23:01.870566 | orchestrator | Monday 06 April 2026 05:22:54 +0000 (0:00:00.122) 0:15:24.523 **********
2026-04-06 05:23:01.870577 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.870588 | orchestrator |
2026-04-06 05:23:01.870599 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-06 05:23:01.870609 | orchestrator | Monday 06 April 2026 05:22:54 +0000 (0:00:00.142) 0:15:24.665 **********
2026-04-06 05:23:01.870620 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.870631 | orchestrator |
2026-04-06 05:23:01.870642 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-06 05:23:01.870653 | orchestrator | Monday 06 April 2026 05:22:55 +0000 (0:00:00.135) 0:15:24.801 **********
2026-04-06 05:23:01.870664 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.870675 | orchestrator |
2026-04-06 05:23:01.870685 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-06 05:23:01.870696 | orchestrator | Monday 06 April 2026 05:22:55 +0000 (0:00:00.139) 0:15:24.941 **********
2026-04-06 05:23:01.870725 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.870737 | orchestrator |
2026-04-06 05:23:01.870748 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-06 05:23:01.870767 | orchestrator | Monday 06 April 2026 05:22:55 +0000 (0:00:00.129) 0:15:25.070 **********
2026-04-06 05:23:01.870778 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.870788 | orchestrator |
2026-04-06 05:23:01.870799 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-06 05:23:01.870810 | orchestrator | Monday 06 April 2026 05:22:55 +0000 (0:00:00.127) 0:15:25.198 **********
2026-04-06 05:23:01.870821 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.870832 | orchestrator |
2026-04-06 05:23:01.870842 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-06 05:23:01.870853 | orchestrator | Monday 06 April 2026 05:22:55 +0000 (0:00:00.122) 0:15:25.320 **********
2026-04-06 05:23:01.870864 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.870875 | orchestrator |
2026-04-06 05:23:01.870886 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-06 05:23:01.870896 | orchestrator | Monday 06 April 2026 05:22:55 +0000 (0:00:00.205) 0:15:25.526 **********
2026-04-06 05:23:01.870907 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:23:01.870918 | orchestrator |
2026-04-06 05:23:01.870929 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-06 05:23:01.870939 | orchestrator | Monday 06 April 2026 05:22:56 +0000 (0:00:00.910) 0:15:26.437 **********
2026-04-06 05:23:01.870950 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:23:01.870961 | orchestrator |
2026-04-06 05:23:01.870971 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-06 05:23:01.870982 | orchestrator | Monday 06 April 2026 05:22:57 +0000 (0:00:01.249) 0:15:27.686 **********
2026-04-06 05:23:01.870993 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-04-06 05:23:01.871004 | orchestrator |
2026-04-06 05:23:01.871015 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-06 05:23:01.871026 | orchestrator | Monday 06 April 2026 05:22:58 +0000 (0:00:00.227) 0:15:27.914 **********
2026-04-06 05:23:01.871037 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.871048 | orchestrator |
2026-04-06 05:23:01.871058 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-06 05:23:01.871069 | orchestrator | Monday 06 April 2026 05:22:58 +0000 (0:00:00.418) 0:15:28.332 **********
2026-04-06 05:23:01.871080 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.871091 | orchestrator |
2026-04-06 05:23:01.871102 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-06 05:23:01.871131 | orchestrator | Monday 06 April 2026 05:22:58 +0000 (0:00:00.150) 0:15:28.483 **********
2026-04-06 05:23:01.871141 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-06 05:23:01.871152 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-06 05:23:01.871169 | orchestrator |
2026-04-06 05:23:01.871188 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-06 05:23:01.871208 | orchestrator | Monday 06 April 2026 05:22:59 +0000 (0:00:00.840) 0:15:29.323 **********
2026-04-06 05:23:01.871225 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:23:01.871244 | orchestrator |
2026-04-06 05:23:01.871264 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-06 05:23:01.871284 | orchestrator | Monday 06 April 2026 05:23:00 +0000 (0:00:00.502) 0:15:29.826 **********
2026-04-06 05:23:01.871304 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.871325 | orchestrator |
2026-04-06 05:23:01.871345 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-06 05:23:01.871372 | orchestrator | Monday 06 April 2026 05:23:00 +0000 (0:00:00.151) 0:15:29.978 **********
2026-04-06 05:23:01.871392 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.871410 | orchestrator |
2026-04-06 05:23:01.871428 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-06 05:23:01.871447 | orchestrator | Monday 06 April 2026 05:23:00 +0000 (0:00:00.177) 0:15:30.155 **********
2026-04-06 05:23:01.871477 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.871496 | orchestrator |
2026-04-06 05:23:01.871514 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-06 05:23:01.871534 | orchestrator | Monday 06 April 2026 05:23:00 +0000 (0:00:00.142) 0:15:30.298 **********
2026-04-06 05:23:01.871555 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-04-06 05:23:01.871575 | orchestrator |
2026-04-06 05:23:01.871594 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-06 05:23:01.871614 | orchestrator | Monday 06 April 2026 05:23:00 +0000 (0:00:00.203) 0:15:30.502 **********
2026-04-06 05:23:01.871625 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:23:01.871636 | orchestrator |
2026-04-06 05:23:01.871647 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-06 05:23:01.871657 | orchestrator | Monday 06 April 2026 05:23:01 +0000 (0:00:00.738) 0:15:31.240 **********
2026-04-06 05:23:01.871668 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-06 05:23:01.871679 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-06 05:23:01.871690 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-06 05:23:01.871701 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.871712 | orchestrator |
2026-04-06 05:23:01.871723 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-06 05:23:01.871734 | orchestrator | Monday 06 April 2026 05:23:01 +0000 (0:00:00.139) 0:15:31.379 **********
2026-04-06 05:23:01.871745 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:01.871755 | orchestrator |
2026-04-06 05:23:01.871766 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-06 05:23:01.871777 | orchestrator | Monday 06 April 2026 05:23:01 +0000 (0:00:00.118) 0:15:31.498 **********
2026-04-06 05:23:01.871799 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:19.643998 | orchestrator |
2026-04-06 05:23:19.644144 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-06 05:23:19.644163 | orchestrator | Monday 06 April 2026 05:23:01 +0000 (0:00:00.183) 0:15:31.681 **********
2026-04-06 05:23:19.644175 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:19.644188 | orchestrator |
2026-04-06 05:23:19.644200 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-06 05:23:19.644211 | orchestrator | Monday 06 April 2026 05:23:02 +0000 (0:00:00.462) 0:15:32.144 **********
2026-04-06 05:23:19.644222 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:19.644233 | orchestrator |
2026-04-06 05:23:19.644244 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-06 05:23:19.644255 | orchestrator | Monday 06 April 2026 05:23:02 +0000 (0:00:00.155) 0:15:32.300 **********
2026-04-06 05:23:19.644266 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:19.644277 | orchestrator |
2026-04-06 05:23:19.644288 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-06 05:23:19.644299 | orchestrator | Monday 06 April 2026 05:23:02 +0000 (0:00:00.154) 0:15:32.454 **********
2026-04-06 05:23:19.644310 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:23:19.644322 | orchestrator |
2026-04-06 05:23:19.644333 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-06 05:23:19.644344 | orchestrator | Monday 06 April 2026 05:23:04 +0000 (0:00:01.479) 0:15:33.933 **********
2026-04-06 05:23:19.644355 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:23:19.644366 | orchestrator |
2026-04-06 05:23:19.644377 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-06 05:23:19.644387 | orchestrator | Monday 06 April 2026 05:23:04 +0000 (0:00:00.161) 0:15:34.095 **********
2026-04-06 05:23:19.644398 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-04-06 05:23:19.644436 | orchestrator |
2026-04-06 05:23:19.644448 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-06 05:23:19.644459 | orchestrator | Monday 06 April 2026 05:23:04 +0000 (0:00:00.233) 0:15:34.328 **********
2026-04-06 05:23:19.644469 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:19.644480 | orchestrator |
2026-04-06 05:23:19.644491 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-06 05:23:19.644502 | orchestrator | Monday 06 April 2026 05:23:04 +0000 (0:00:00.148) 0:15:34.477 **********
2026-04-06 05:23:19.644512 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:19.644524 | orchestrator |
2026-04-06 05:23:19.644536 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-06 05:23:19.644549 | orchestrator | Monday 06 April 2026 05:23:04 +0000 (0:00:00.151) 0:15:34.628 **********
2026-04-06 05:23:19.644562 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:19.644574 | orchestrator |
2026-04-06 05:23:19.644586 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-06 05:23:19.644599 | orchestrator | Monday 06 April 2026 05:23:05 +0000 (0:00:00.171) 0:15:34.800 **********
2026-04-06 05:23:19.644611 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:19.644623 | orchestrator |
2026-04-06 05:23:19.644634 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-06 05:23:19.644645 | orchestrator | Monday 06 April 2026 05:23:05 +0000 (0:00:00.130) 0:15:34.930 **********
2026-04-06 05:23:19.644655 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:19.644666 | orchestrator |
2026-04-06 05:23:19.644677 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-06 05:23:19.644695 | orchestrator | Monday 06 April 2026 05:23:05 +0000 (0:00:00.162) 0:15:35.092 **********
2026-04-06 05:23:19.644713 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:19.644733 | orchestrator |
2026-04-06 05:23:19.644769 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-06 05:23:19.644790 | orchestrator | Monday 06 April 2026 05:23:05 +0000 (0:00:00.140) 0:15:35.233 **********
2026-04-06 05:23:19.644808 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:19.644828 | orchestrator |
2026-04-06 05:23:19.644847 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-06 05:23:19.644866 | orchestrator | Monday 06 April 2026 05:23:05 +0000 (0:00:00.440) 0:15:35.674 **********
2026-04-06 05:23:19.644885 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:19.644905 | orchestrator |
2026-04-06 05:23:19.644919 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-06 05:23:19.644930 | orchestrator | Monday 06 April 2026 05:23:06 +0000 (0:00:00.143) 0:15:35.817 **********
2026-04-06 05:23:19.644940 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:23:19.644951 | orchestrator |
2026-04-06 05:23:19.644962 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-06 05:23:19.644972 | orchestrator | Monday 06 April 2026 05:23:06 +0000 (0:00:00.213) 0:15:36.031 **********
2026-04-06 05:23:19.644983 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-04-06 05:23:19.644995 | orchestrator |
2026-04-06 05:23:19.645006 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-06 05:23:19.645017 | orchestrator | Monday 06 April 2026 05:23:06 +0000 (0:00:00.206) 0:15:36.237 **********
2026-04-06 05:23:19.645028 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-04-06 05:23:19.645039 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-04-06 05:23:19.645050 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-04-06 05:23:19.645061 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-04-06 05:23:19.645072 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-04-06 05:23:19.645082 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-04-06 05:23:19.645093 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-04-06 05:23:19.645103 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-04-06 05:23:19.645154 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-06 05:23:19.645187 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-06 05:23:19.645199 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-06 05:23:19.645209 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-06 05:23:19.645220 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-06 05:23:19.645231 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-06 05:23:19.645242 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-04-06 05:23:19.645252 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-04-06 05:23:19.645263 | orchestrator |
2026-04-06 05:23:19.645274 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-06 05:23:19.645285 | orchestrator | Monday 06 April 2026 05:23:12 +0000 (0:00:05.665) 0:15:41.903 **********
2026-04-06 05:23:19.645295 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-04-06 05:23:19.645306 | orchestrator |
2026-04-06 05:23:19.645317 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-06 05:23:19.645328 | orchestrator | Monday 06 April 2026 05:23:12 +0000 (0:00:00.581) 0:15:42.484 **********
2026-04-06 05:23:19.645338 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-06 05:23:19.645350 | orchestrator |
2026-04-06 05:23:19.645361 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-06 05:23:19.645372 | orchestrator | Monday 06 April 2026 05:23:13 +0000 (0:00:00.497) 0:15:42.982 **********
2026-04-06 05:23:19.645383 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-06 05:23:19.645394 | orchestrator |
2026-04-06 05:23:19.645404 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-06 05:23:19.645415 | orchestrator | Monday 06 April 2026 05:23:14 +0000 (0:00:00.958) 0:15:43.941 **********
2026-04-06 05:23:19.645426 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:19.645436 | orchestrator |
2026-04-06 05:23:19.645447 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-06 05:23:19.645458 | orchestrator | Monday 06 April 2026 05:23:14 +0000 (0:00:00.125) 0:15:44.066 **********
2026-04-06 05:23:19.645468 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:19.645479 | orchestrator |
2026-04-06 05:23:19.645489 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-06 05:23:19.645500 | orchestrator | Monday 06 April 2026 05:23:14 +0000 (0:00:00.146) 0:15:44.212 **********
2026-04-06 05:23:19.645510 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:19.645521 | orchestrator |
2026-04-06 05:23:19.645532 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-06 05:23:19.645542 | orchestrator | Monday 06 April 2026 05:23:14 +0000 (0:00:00.453) 0:15:44.666 **********
2026-04-06 05:23:19.645553 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:19.645563 | orchestrator |
2026-04-06 05:23:19.645574 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-06 05:23:19.645585 | orchestrator | Monday 06 April 2026 05:23:15 +0000 (0:00:00.132) 0:15:44.798 **********
2026-04-06 05:23:19.645595 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:19.645606 | orchestrator |
2026-04-06 05:23:19.645616 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-06 05:23:19.645627 | orchestrator | Monday 06 April 2026 05:23:15 +0000 (0:00:00.148) 0:15:44.947 **********
2026-04-06 05:23:19.645645 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:19.645660 | orchestrator |
2026-04-06 05:23:19.645678 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-06 05:23:19.645705 | orchestrator | Monday 06 April 2026 05:23:15 +0000 (0:00:00.145) 0:15:45.093 **********
2026-04-06 05:23:19.645724 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:19.645741 | orchestrator |
2026-04-06 05:23:19.645759 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-06 05:23:19.645776 | orchestrator | Monday 06 April 2026 05:23:15 +0000 (0:00:00.140) 0:15:45.233 **********
2026-04-06 05:23:19.645794 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:19.645813 | orchestrator |
2026-04-06 05:23:19.645830 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-06 05:23:19.645848 | orchestrator | Monday 06 April 2026 05:23:15 +0000 (0:00:00.136) 0:15:45.370 **********
2026-04-06 05:23:19.645867 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:19.645885 | orchestrator |
2026-04-06 05:23:19.645903 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-06 05:23:19.645921 | orchestrator | Monday 06 April 2026 05:23:15 +0000 (0:00:00.149) 0:15:45.519 **********
2026-04-06 05:23:19.645940 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:23:19.645960 | orchestrator |
2026-04-06 05:23:19.645977 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-06 05:23:19.645996 | orchestrator | Monday 06 April 2026 05:23:15 +0000 (0:00:00.124) 0:15:45.643 **********
2026-04-06 05:23:19.646014 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:23:19.646151 | orchestrator |
2026-04-06 05:23:19.646173 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-06 05:23:19.646191 | orchestrator | Monday 06 April 2026 05:23:16 +0000 (0:00:00.215) 0:15:45.859 **********
2026-04-06 05:23:19.646208 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-06 05:23:19.646228 | orchestrator |
2026-04-06 05:23:19.646247 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-06 05:23:19.646265 | orchestrator | Monday 06 April 2026 05:23:19 +0000 (0:00:03.402) 0:15:49.261 **********
2026-04-06 05:23:19.646301 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-06 05:23:41.470101 | orchestrator |
2026-04-06 05:23:41.470241 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-06 05:23:41.470259 | orchestrator | Monday 06 April 2026 05:23:19 +0000 (0:00:00.176) 0:15:49.437 **********
2026-04-06 05:23:41.470275 |
orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-04-06 05:23:41.470289 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-04-06 05:23:41.470302 | orchestrator | 2026-04-06 05:23:41.470314 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-06 05:23:41.470325 | orchestrator | Monday 06 April 2026 05:23:26 +0000 (0:00:06.698) 0:15:56.136 ********** 2026-04-06 05:23:41.470336 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:23:41.470349 | orchestrator | 2026-04-06 05:23:41.470360 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-06 05:23:41.470371 | orchestrator | Monday 06 April 2026 05:23:26 +0000 (0:00:00.135) 0:15:56.271 ********** 2026-04-06 05:23:41.470382 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:23:41.470393 | orchestrator | 2026-04-06 05:23:41.470405 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-06 05:23:41.470443 | orchestrator | Monday 06 April 2026 05:23:26 +0000 (0:00:00.132) 0:15:56.404 ********** 2026-04-06 05:23:41.470455 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:23:41.470466 | orchestrator | 2026-04-06 05:23:41.470477 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-04-06 05:23:41.470488 | orchestrator | Monday 06 April 2026 05:23:27 +0000 (0:00:00.476) 0:15:56.881 ********** 2026-04-06 05:23:41.470499 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:23:41.470511 | orchestrator | 2026-04-06 05:23:41.470522 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-06 05:23:41.470533 | orchestrator | Monday 06 April 2026 05:23:27 +0000 (0:00:00.156) 0:15:57.037 ********** 2026-04-06 05:23:41.470544 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:23:41.470555 | orchestrator | 2026-04-06 05:23:41.470568 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-06 05:23:41.470581 | orchestrator | Monday 06 April 2026 05:23:27 +0000 (0:00:00.163) 0:15:57.201 ********** 2026-04-06 05:23:41.470594 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:23:41.470607 | orchestrator | 2026-04-06 05:23:41.470620 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-06 05:23:41.470633 | orchestrator | Monday 06 April 2026 05:23:27 +0000 (0:00:00.245) 0:15:57.446 ********** 2026-04-06 05:23:41.470646 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 05:23:41.470658 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 05:23:41.470672 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 05:23:41.470698 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:23:41.470712 | orchestrator | 2026-04-06 05:23:41.470725 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-06 05:23:41.470738 | orchestrator | Monday 06 April 2026 05:23:28 +0000 (0:00:00.428) 0:15:57.875 ********** 2026-04-06 05:23:41.470750 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 05:23:41.470763 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 05:23:41.470775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 05:23:41.470788 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:23:41.470800 | orchestrator | 2026-04-06 05:23:41.470813 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-06 05:23:41.470826 | orchestrator | Monday 06 April 2026 05:23:28 +0000 (0:00:00.425) 0:15:58.301 ********** 2026-04-06 05:23:41.470838 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 05:23:41.470849 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 05:23:41.470860 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 05:23:41.470871 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:23:41.470882 | orchestrator | 2026-04-06 05:23:41.470892 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-06 05:23:41.470903 | orchestrator | Monday 06 April 2026 05:23:29 +0000 (0:00:00.432) 0:15:58.734 ********** 2026-04-06 05:23:41.470915 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:23:41.470926 | orchestrator | 2026-04-06 05:23:41.470936 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-06 05:23:41.470947 | orchestrator | Monday 06 April 2026 05:23:29 +0000 (0:00:00.175) 0:15:58.909 ********** 2026-04-06 05:23:41.470958 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-06 05:23:41.470969 | orchestrator | 2026-04-06 05:23:41.470980 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-06 05:23:41.470991 | orchestrator | Monday 06 April 2026 05:23:29 +0000 (0:00:00.449) 0:15:59.358 ********** 2026-04-06 05:23:41.471002 | orchestrator | changed: [testbed-node-3] 2026-04-06 05:23:41.471013 | orchestrator | 
2026-04-06 05:23:41.471024 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-04-06 05:23:41.471035 | orchestrator | Monday 06 April 2026 05:23:30 +0000 (0:00:00.836) 0:16:00.195 ********** 2026-04-06 05:23:41.471056 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:23:41.471067 | orchestrator | 2026-04-06 05:23:41.471095 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-06 05:23:41.471107 | orchestrator | Monday 06 April 2026 05:23:30 +0000 (0:00:00.144) 0:16:00.340 ********** 2026-04-06 05:23:41.471118 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:23:41.471154 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:23:41.471167 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:23:41.471177 | orchestrator | 2026-04-06 05:23:41.471188 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-04-06 05:23:41.471199 | orchestrator | Monday 06 April 2026 05:23:31 +0000 (0:00:01.356) 0:16:01.697 ********** 2026-04-06 05:23:41.471210 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3 2026-04-06 05:23:41.471220 | orchestrator | 2026-04-06 05:23:41.471231 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-06 05:23:41.471242 | orchestrator | Monday 06 April 2026 05:23:32 +0000 (0:00:00.562) 0:16:02.259 ********** 2026-04-06 05:23:41.471253 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:23:41.471264 | orchestrator | 2026-04-06 05:23:41.471274 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-06 05:23:41.471285 | orchestrator | Monday 06 April 2026 05:23:32 +0000 (0:00:00.152) 
0:16:02.412 ********** 2026-04-06 05:23:41.471296 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:23:41.471306 | orchestrator | 2026-04-06 05:23:41.471317 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-06 05:23:41.471328 | orchestrator | Monday 06 April 2026 05:23:32 +0000 (0:00:00.136) 0:16:02.549 ********** 2026-04-06 05:23:41.471339 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:23:41.471349 | orchestrator | 2026-04-06 05:23:41.471360 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-06 05:23:41.471371 | orchestrator | Monday 06 April 2026 05:23:33 +0000 (0:00:00.447) 0:16:02.997 ********** 2026-04-06 05:23:41.471381 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:23:41.471392 | orchestrator | 2026-04-06 05:23:41.471403 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-06 05:23:41.471414 | orchestrator | Monday 06 April 2026 05:23:33 +0000 (0:00:00.148) 0:16:03.146 ********** 2026-04-06 05:23:41.471424 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-06 05:23:41.471435 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-06 05:23:41.471446 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-06 05:23:41.471457 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-06 05:23:41.471468 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-06 05:23:41.471478 | orchestrator | 2026-04-06 05:23:41.471489 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-04-06 05:23:41.471500 | orchestrator | Monday 06 April 2026 05:23:35 +0000 (0:00:02.007) 0:16:05.154 ********** 2026-04-06 
05:23:41.471510 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:23:41.471521 | orchestrator | 2026-04-06 05:23:41.471532 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-06 05:23:41.471543 | orchestrator | Monday 06 April 2026 05:23:35 +0000 (0:00:00.126) 0:16:05.280 ********** 2026-04-06 05:23:41.471560 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3 2026-04-06 05:23:41.471571 | orchestrator | 2026-04-06 05:23:41.471582 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-06 05:23:41.471593 | orchestrator | Monday 06 April 2026 05:23:36 +0000 (0:00:00.589) 0:16:05.870 ********** 2026-04-06 05:23:41.471611 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-06 05:23:41.471622 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-04-06 05:23:41.471633 | orchestrator | 2026-04-06 05:23:41.471643 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-06 05:23:41.471654 | orchestrator | Monday 06 April 2026 05:23:36 +0000 (0:00:00.846) 0:16:06.716 ********** 2026-04-06 05:23:41.471665 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 05:23:41.471676 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-06 05:23:41.471687 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-06 05:23:41.471698 | orchestrator | 2026-04-06 05:23:41.471709 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-06 05:23:41.471720 | orchestrator | Monday 06 April 2026 05:23:39 +0000 (0:00:02.697) 0:16:09.413 ********** 2026-04-06 05:23:41.471730 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-06 05:23:41.471741 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-06 
05:23:41.471752 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:23:41.471763 | orchestrator | 2026-04-06 05:23:41.471773 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-06 05:23:41.471784 | orchestrator | Monday 06 April 2026 05:23:40 +0000 (0:00:01.265) 0:16:10.679 ********** 2026-04-06 05:23:41.471794 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:23:41.471805 | orchestrator | 2026-04-06 05:23:41.471816 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-06 05:23:41.471827 | orchestrator | Monday 06 April 2026 05:23:41 +0000 (0:00:00.234) 0:16:10.913 ********** 2026-04-06 05:23:41.471837 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:23:41.471848 | orchestrator | 2026-04-06 05:23:41.471859 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-06 05:23:41.471869 | orchestrator | Monday 06 April 2026 05:23:41 +0000 (0:00:00.135) 0:16:11.049 ********** 2026-04-06 05:23:41.471880 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:23:41.471891 | orchestrator | 2026-04-06 05:23:41.471908 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-06 05:24:17.738758 | orchestrator | Monday 06 April 2026 05:23:41 +0000 (0:00:00.124) 0:16:11.174 ********** 2026-04-06 05:24:17.738885 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3 2026-04-06 05:24:17.738905 | orchestrator | 2026-04-06 05:24:17.738918 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-06 05:24:17.738929 | orchestrator | Monday 06 April 2026 05:23:42 +0000 (0:00:00.642) 0:16:11.817 ********** 2026-04-06 05:24:17.738940 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:24:17.738952 | orchestrator | 2026-04-06 05:24:17.738963 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-04-06 05:24:17.738974 | orchestrator | Monday 06 April 2026 05:23:42 +0000 (0:00:00.472) 0:16:12.290 ********** 2026-04-06 05:24:17.738984 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:24:17.738995 | orchestrator | 2026-04-06 05:24:17.739006 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-06 05:24:17.739017 | orchestrator | Monday 06 April 2026 05:23:45 +0000 (0:00:02.506) 0:16:14.796 ********** 2026-04-06 05:24:17.739028 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3 2026-04-06 05:24:17.739039 | orchestrator | 2026-04-06 05:24:17.739050 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-04-06 05:24:17.739060 | orchestrator | Monday 06 April 2026 05:23:45 +0000 (0:00:00.594) 0:16:15.391 ********** 2026-04-06 05:24:17.739071 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:24:17.739082 | orchestrator | 2026-04-06 05:24:17.739092 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-06 05:24:17.739103 | orchestrator | Monday 06 April 2026 05:23:46 +0000 (0:00:00.988) 0:16:16.379 ********** 2026-04-06 05:24:17.739114 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:24:17.739195 | orchestrator | 2026-04-06 05:24:17.739208 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-06 05:24:17.739219 | orchestrator | Monday 06 April 2026 05:23:47 +0000 (0:00:00.919) 0:16:17.299 ********** 2026-04-06 05:24:17.739230 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:24:17.739240 | orchestrator | 2026-04-06 05:24:17.739251 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-06 05:24:17.739262 | orchestrator | Monday 06 April 2026 05:23:48 +0000 (0:00:01.260) 0:16:18.559 ********** 2026-04-06 
05:24:17.739273 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:24:17.739284 | orchestrator | 2026-04-06 05:24:17.739294 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-04-06 05:24:17.739306 | orchestrator | Monday 06 April 2026 05:23:49 +0000 (0:00:00.406) 0:16:18.965 ********** 2026-04-06 05:24:17.739319 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:24:17.739332 | orchestrator | 2026-04-06 05:24:17.739345 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-06 05:24:17.739357 | orchestrator | Monday 06 April 2026 05:23:49 +0000 (0:00:00.161) 0:16:19.127 ********** 2026-04-06 05:24:17.739369 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-04-06 05:24:17.739382 | orchestrator | ok: [testbed-node-3] => (item=1) 2026-04-06 05:24:17.739394 | orchestrator | 2026-04-06 05:24:17.739406 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-06 05:24:17.739420 | orchestrator | Monday 06 April 2026 05:23:50 +0000 (0:00:00.856) 0:16:19.983 ********** 2026-04-06 05:24:17.739432 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-04-06 05:24:17.739444 | orchestrator | ok: [testbed-node-3] => (item=1) 2026-04-06 05:24:17.739457 | orchestrator | 2026-04-06 05:24:17.739468 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-04-06 05:24:17.739495 | orchestrator | Monday 06 April 2026 05:23:52 +0000 (0:00:01.938) 0:16:21.921 ********** 2026-04-06 05:24:17.739508 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-04-06 05:24:17.739521 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-04-06 05:24:17.739533 | orchestrator | 2026-04-06 05:24:17.739546 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-06 05:24:17.739559 | orchestrator | Monday 06 April 2026 05:23:55 +0000 (0:00:03.534) 
0:16:25.456 ********** 2026-04-06 05:24:17.739573 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:24:17.739586 | orchestrator | 2026-04-06 05:24:17.739598 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-06 05:24:17.739610 | orchestrator | Monday 06 April 2026 05:23:55 +0000 (0:00:00.242) 0:16:25.698 ********** 2026-04-06 05:24:17.739623 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:24:17.739635 | orchestrator | 2026-04-06 05:24:17.739648 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-06 05:24:17.739659 | orchestrator | Monday 06 April 2026 05:23:56 +0000 (0:00:00.239) 0:16:25.937 ********** 2026-04-06 05:24:17.739670 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:24:17.739680 | orchestrator | 2026-04-06 05:24:17.739691 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-04-06 05:24:17.739702 | orchestrator | Monday 06 April 2026 05:23:56 +0000 (0:00:00.308) 0:16:26.246 ********** 2026-04-06 05:24:17.739712 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:24:17.739723 | orchestrator | 2026-04-06 05:24:17.739733 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-04-06 05:24:17.739744 | orchestrator | Monday 06 April 2026 05:23:56 +0000 (0:00:00.119) 0:16:26.366 ********** 2026-04-06 05:24:17.739755 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:24:17.739765 | orchestrator | 2026-04-06 05:24:17.739776 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-04-06 05:24:17.739786 | orchestrator | Monday 06 April 2026 05:23:56 +0000 (0:00:00.128) 0:16:26.495 ********** 2026-04-06 05:24:17.739797 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-04-06 05:24:17.739817 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-04-06 05:24:17.739828 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-04-06 05:24:17.739854 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-04-06 05:24:17.739866 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (596 retries left). 2026-04-06 05:24:17.739877 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:24:17.739887 | orchestrator | 2026-04-06 05:24:17.739898 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-06 05:24:17.739908 | orchestrator | Monday 06 April 2026 05:24:13 +0000 (0:00:16.264) 0:16:42.759 ********** 2026-04-06 05:24:17.739919 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:24:17.739930 | orchestrator | 2026-04-06 05:24:17.739941 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-06 05:24:17.739952 | orchestrator | Monday 06 April 2026 05:24:13 +0000 (0:00:00.424) 0:16:43.184 ********** 2026-04-06 05:24:17.739962 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:24:17.739973 | orchestrator | 2026-04-06 05:24:17.739983 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-06 05:24:17.739994 | orchestrator | Monday 06 April 2026 05:24:13 +0000 (0:00:00.141) 0:16:43.325 ********** 2026-04-06 05:24:17.740005 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:24:17.740016 | orchestrator | 2026-04-06 05:24:17.740026 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-06 05:24:17.740037 | orchestrator | Monday 06 April 2026 05:24:13 +0000 
(0:00:00.123) 0:16:43.449 ********** 2026-04-06 05:24:17.740047 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:24:17.740058 | orchestrator | 2026-04-06 05:24:17.740069 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-06 05:24:17.740079 | orchestrator | Monday 06 April 2026 05:24:13 +0000 (0:00:00.131) 0:16:43.580 ********** 2026-04-06 05:24:17.740090 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:24:17.740100 | orchestrator | 2026-04-06 05:24:17.740111 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-04-06 05:24:17.740122 | orchestrator | Monday 06 April 2026 05:24:13 +0000 (0:00:00.125) 0:16:43.706 ********** 2026-04-06 05:24:17.740132 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:24:17.740143 | orchestrator | 2026-04-06 05:24:17.740173 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-06 05:24:17.740184 | orchestrator | Monday 06 April 2026 05:24:14 +0000 (0:00:00.135) 0:16:43.841 ********** 2026-04-06 05:24:17.740195 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:24:17.740205 | orchestrator | 2026-04-06 05:24:17.740216 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-04-06 05:24:17.740227 | orchestrator | 2026-04-06 05:24:17.740238 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-06 05:24:17.740248 | orchestrator | Monday 06 April 2026 05:24:14 +0000 (0:00:00.578) 0:16:44.420 ********** 2026-04-06 05:24:17.740259 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-04-06 05:24:17.740270 | orchestrator | 2026-04-06 05:24:17.740281 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-06 05:24:17.740291 | orchestrator | Monday 06 April 2026 05:24:14 +0000 
(0:00:00.244) 0:16:44.664 ********** 2026-04-06 05:24:17.740478 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:17.740494 | orchestrator | 2026-04-06 05:24:17.740505 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-06 05:24:17.740516 | orchestrator | Monday 06 April 2026 05:24:15 +0000 (0:00:00.457) 0:16:45.122 ********** 2026-04-06 05:24:17.740526 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:17.740537 | orchestrator | 2026-04-06 05:24:17.740564 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-06 05:24:17.740575 | orchestrator | Monday 06 April 2026 05:24:15 +0000 (0:00:00.135) 0:16:45.258 ********** 2026-04-06 05:24:17.740586 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:17.740597 | orchestrator | 2026-04-06 05:24:17.740607 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-06 05:24:17.740618 | orchestrator | Monday 06 April 2026 05:24:15 +0000 (0:00:00.438) 0:16:45.697 ********** 2026-04-06 05:24:17.740629 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:17.740639 | orchestrator | 2026-04-06 05:24:17.740650 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-06 05:24:17.740661 | orchestrator | Monday 06 April 2026 05:24:16 +0000 (0:00:00.446) 0:16:46.143 ********** 2026-04-06 05:24:17.740672 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:17.740683 | orchestrator | 2026-04-06 05:24:17.740694 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-06 05:24:17.740705 | orchestrator | Monday 06 April 2026 05:24:16 +0000 (0:00:00.137) 0:16:46.280 ********** 2026-04-06 05:24:17.740715 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:17.740726 | orchestrator | 2026-04-06 05:24:17.740737 | orchestrator | TASK [ceph-facts : Set_fact 
discovered_interpreter_python if not previously set] *** 2026-04-06 05:24:17.740748 | orchestrator | Monday 06 April 2026 05:24:16 +0000 (0:00:00.164) 0:16:46.445 ********** 2026-04-06 05:24:17.740759 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:17.740769 | orchestrator | 2026-04-06 05:24:17.740780 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-06 05:24:17.740791 | orchestrator | Monday 06 April 2026 05:24:16 +0000 (0:00:00.152) 0:16:46.597 ********** 2026-04-06 05:24:17.740801 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:17.740812 | orchestrator | 2026-04-06 05:24:17.740823 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-06 05:24:17.740833 | orchestrator | Monday 06 April 2026 05:24:17 +0000 (0:00:00.163) 0:16:46.761 ********** 2026-04-06 05:24:17.740844 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:24:17.740855 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:24:17.740865 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:24:17.740876 | orchestrator | 2026-04-06 05:24:17.740887 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-06 05:24:17.740906 | orchestrator | Monday 06 April 2026 05:24:17 +0000 (0:00:00.685) 0:16:47.446 ********** 2026-04-06 05:24:25.305312 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:25.305419 | orchestrator | 2026-04-06 05:24:25.305436 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-06 05:24:25.305449 | orchestrator | Monday 06 April 2026 05:24:17 +0000 (0:00:00.268) 0:16:47.715 ********** 2026-04-06 05:24:25.305460 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item=testbed-node-0) 2026-04-06 05:24:25.305471 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:24:25.305482 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:24:25.305493 | orchestrator | 2026-04-06 05:24:25.305504 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-06 05:24:25.305515 | orchestrator | Monday 06 April 2026 05:24:19 +0000 (0:00:01.862) 0:16:49.577 ********** 2026-04-06 05:24:25.305526 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-06 05:24:25.305537 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-06 05:24:25.305548 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-06 05:24:25.305560 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:25.305571 | orchestrator | 2026-04-06 05:24:25.305582 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-06 05:24:25.305618 | orchestrator | Monday 06 April 2026 05:24:20 +0000 (0:00:00.427) 0:16:50.005 ********** 2026-04-06 05:24:25.305631 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-06 05:24:25.305645 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-06 05:24:25.305656 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-06 05:24:25.305667 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:25.305678 | orchestrator | 2026-04-06 05:24:25.305689 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-06 05:24:25.305700 | orchestrator | Monday 06 April 2026 05:24:21 +0000 (0:00:00.971) 0:16:50.977 ********** 2026-04-06 05:24:25.305727 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:24:25.305740 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:24:25.305752 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:24:25.305763 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:25.305773 | orchestrator | 2026-04-06 05:24:25.305784 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2026-04-06 05:24:25.305795 | orchestrator | Monday 06 April 2026 05:24:21 +0000 (0:00:00.162) 0:16:51.140 ********** 2026-04-06 05:24:25.305825 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '06ed7bf51830', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-06 05:24:18.508425', 'end': '2026-04-06 05:24:18.569009', 'delta': '0:00:00.060584', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['06ed7bf51830'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-06 05:24:25.305842 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '6879ce368bbc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-06 05:24:19.093386', 'end': '2026-04-06 05:24:19.142111', 'delta': '0:00:00.048725', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6879ce368bbc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-06 05:24:25.305865 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'a00606ebddc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-06 05:24:19.665539', 'end': 
'2026-04-06 05:24:19.720512', 'delta': '0:00:00.054973', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a00606ebddc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-06 05:24:25.305878 | orchestrator | 2026-04-06 05:24:25.305890 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-06 05:24:25.305903 | orchestrator | Monday 06 April 2026 05:24:21 +0000 (0:00:00.215) 0:16:51.355 ********** 2026-04-06 05:24:25.305915 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:25.305928 | orchestrator | 2026-04-06 05:24:25.305940 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-06 05:24:25.305953 | orchestrator | Monday 06 April 2026 05:24:21 +0000 (0:00:00.257) 0:16:51.613 ********** 2026-04-06 05:24:25.305965 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:25.305978 | orchestrator | 2026-04-06 05:24:25.305991 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-06 05:24:25.306010 | orchestrator | Monday 06 April 2026 05:24:22 +0000 (0:00:00.624) 0:16:52.238 ********** 2026-04-06 05:24:25.306105 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:25.306125 | orchestrator | 2026-04-06 05:24:25.306143 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-06 05:24:25.306199 | orchestrator | Monday 06 April 2026 05:24:22 +0000 (0:00:00.474) 0:16:52.712 ********** 2026-04-06 05:24:25.306217 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:24:25.306235 | 
orchestrator | 2026-04-06 05:24:25.306253 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-06 05:24:25.306272 | orchestrator | Monday 06 April 2026 05:24:24 +0000 (0:00:01.053) 0:16:53.766 ********** 2026-04-06 05:24:25.306292 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:25.306312 | orchestrator | 2026-04-06 05:24:25.306331 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-06 05:24:25.306345 | orchestrator | Monday 06 April 2026 05:24:24 +0000 (0:00:00.153) 0:16:53.920 ********** 2026-04-06 05:24:25.306355 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:25.306366 | orchestrator | 2026-04-06 05:24:25.306377 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-06 05:24:25.306387 | orchestrator | Monday 06 April 2026 05:24:24 +0000 (0:00:00.149) 0:16:54.069 ********** 2026-04-06 05:24:25.306398 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:25.306408 | orchestrator | 2026-04-06 05:24:25.306419 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-06 05:24:25.306429 | orchestrator | Monday 06 April 2026 05:24:24 +0000 (0:00:00.239) 0:16:54.309 ********** 2026-04-06 05:24:25.306440 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:25.306451 | orchestrator | 2026-04-06 05:24:25.306462 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-06 05:24:25.306473 | orchestrator | Monday 06 April 2026 05:24:24 +0000 (0:00:00.131) 0:16:54.440 ********** 2026-04-06 05:24:25.306483 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:25.306494 | orchestrator | 2026-04-06 05:24:25.306515 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-06 05:24:25.306526 | orchestrator | Monday 06 April 2026 05:24:24 +0000 
(0:00:00.127) 0:16:54.568 ********** 2026-04-06 05:24:25.306536 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:25.306547 | orchestrator | 2026-04-06 05:24:25.306557 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-06 05:24:25.306568 | orchestrator | Monday 06 April 2026 05:24:25 +0000 (0:00:00.172) 0:16:54.740 ********** 2026-04-06 05:24:25.306578 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:25.306589 | orchestrator | 2026-04-06 05:24:25.306606 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-06 05:24:25.306633 | orchestrator | Monday 06 April 2026 05:24:25 +0000 (0:00:00.117) 0:16:54.858 ********** 2026-04-06 05:24:25.306652 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:25.306669 | orchestrator | 2026-04-06 05:24:25.306687 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-06 05:24:25.306716 | orchestrator | Monday 06 April 2026 05:24:25 +0000 (0:00:00.161) 0:16:55.020 ********** 2026-04-06 05:24:25.812182 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:25.812290 | orchestrator | 2026-04-06 05:24:25.812306 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-06 05:24:25.812320 | orchestrator | Monday 06 April 2026 05:24:25 +0000 (0:00:00.121) 0:16:55.141 ********** 2026-04-06 05:24:25.812332 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:25.812344 | orchestrator | 2026-04-06 05:24:25.812355 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-06 05:24:25.812366 | orchestrator | Monday 06 April 2026 05:24:25 +0000 (0:00:00.164) 0:16:55.306 ********** 2026-04-06 05:24:25.812379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 
'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:24:25.812396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8c307d7c--3927--5061--a8a8--155bb148bb1a-osd--block--8c307d7c--3927--5061--a8a8--155bb148bb1a', 'dm-uuid-LVM-5SBcK6LYcqc3U9JW4A7AEqQb9XhQaJZNALmkUrHWUZpUhCY8hyCk4SVv02FoAkUp'], 'uuids': ['83378823-14d2-4928-9007-67488abc99a7'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '48ce9836', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ALmkUr-HWUZ-pUhC-Y8hy-Ck4S-Vv02-FoAkUp']}})  2026-04-06 05:24:25.812411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de', 'scsi-SQEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4a868051', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-06 05:24:25.812441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-9JZghf-Tj4T-hJH3-TdHl-k5PF-Zmcx-ynVATr', 'scsi-0QEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554', 'scsi-SQEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f369a6c0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3-osd--block--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3']}})  2026-04-06 05:24:25.812485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:24:25.812498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:24:25.812557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-43-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-06 05:24:25.812572 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:24:25.812584 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-7tdY8L-LV0U-b3l0-Z8I0-Y4ch-NDJ3-j6J7vO', 'dm-uuid-CRYPT-LUKS2-dd6ed06a0d554d6181a429bf5c5222d7-7tdY8L-LV0U-b3l0-Z8I0-Y4ch-NDJ3-j6J7vO'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-06 05:24:25.812596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:24:25.812608 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3-osd--block--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3', 'dm-uuid-LVM-UTQM7S53ibMHEifiI2Bv5Thw7s0lsM0j7tdY8LLV0Ub3l0Z8I0Y4chNDJ3j6J7vO'], 'uuids': ['dd6ed06a-0d55-4d61-81a4-29bf5c5222d7'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f369a6c0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['7tdY8L-LV0U-b3l0-Z8I0-Y4ch-NDJ3-j6J7vO']}})  2026-04-06 05:24:25.812628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-bmjYoX-DOC2-0AWC-rYYB-WEnJ-01uQ-WQd2JR', 'scsi-0QEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867', 'scsi-SQEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48ce9836', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--8c307d7c--3927--5061--a8a8--155bb148bb1a-osd--block--8c307d7c--3927--5061--a8a8--155bb148bb1a']}})  2026-04-06 05:24:25.812639 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:24:25.812704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '40f67feb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part16', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part14', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part15', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part1', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-06 05:24:26.513040 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:24:26.513220 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:24:26.513275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ALmkUr-HWUZ-pUhC-Y8hy-Ck4S-Vv02-FoAkUp', 'dm-uuid-CRYPT-LUKS2-8337882314d24928900767488abc99a7-ALmkUr-HWUZ-pUhC-Y8hy-Ck4S-Vv02-FoAkUp'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-06 05:24:26.513297 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:26.513311 | orchestrator | 2026-04-06 05:24:26.513323 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-06 05:24:26.513335 | orchestrator | Monday 06 April 2026 05:24:25 +0000 (0:00:00.344) 0:16:55.651 ********** 2026-04-06 05:24:26.513350 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:24:26.513369 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8c307d7c--3927--5061--a8a8--155bb148bb1a-osd--block--8c307d7c--3927--5061--a8a8--155bb148bb1a', 'dm-uuid-LVM-5SBcK6LYcqc3U9JW4A7AEqQb9XhQaJZNALmkUrHWUZpUhCY8hyCk4SVv02FoAkUp'], 'uuids': ['83378823-14d2-4928-9007-67488abc99a7'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '48ce9836', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ALmkUr-HWUZ-pUhC-Y8hy-Ck4S-Vv02-FoAkUp']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:24:26.513383 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de', 'scsi-SQEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4a868051', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:24:26.513419 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-9JZghf-Tj4T-hJH3-TdHl-k5PF-Zmcx-ynVATr', 'scsi-0QEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554', 'scsi-SQEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f369a6c0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3-osd--block--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:24:26.513449 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:24:26.513462 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:24:26.513474 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:24:26.513486 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:24:26.513503 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-7tdY8L-LV0U-b3l0-Z8I0-Y4ch-NDJ3-j6J7vO', 'dm-uuid-CRYPT-LUKS2-dd6ed06a0d554d6181a429bf5c5222d7-7tdY8L-LV0U-b3l0-Z8I0-Y4ch-NDJ3-j6J7vO'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:24:28.184647 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:24:28.184751 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3-osd--block--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3', 'dm-uuid-LVM-UTQM7S53ibMHEifiI2Bv5Thw7s0lsM0j7tdY8LLV0Ub3l0Z8I0Y4chNDJ3j6J7vO'], 'uuids': ['dd6ed06a-0d55-4d61-81a4-29bf5c5222d7'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f369a6c0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['7tdY8L-LV0U-b3l0-Z8I0-Y4ch-NDJ3-j6J7vO']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:24:28.184769 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-bmjYoX-DOC2-0AWC-rYYB-WEnJ-01uQ-WQd2JR', 'scsi-0QEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867', 'scsi-SQEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48ce9836', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8c307d7c--3927--5061--a8a8--155bb148bb1a-osd--block--8c307d7c--3927--5061--a8a8--155bb148bb1a']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:24:28.184804 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:24:28.184857 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '40f67feb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part16', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part14', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part15', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part1', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:24:28.184897 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:24:28.184910 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:24:28.184922 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ALmkUr-HWUZ-pUhC-Y8hy-Ck4S-Vv02-FoAkUp', 'dm-uuid-CRYPT-LUKS2-8337882314d24928900767488abc99a7-ALmkUr-HWUZ-pUhC-Y8hy-Ck4S-Vv02-FoAkUp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:24:28.184935 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:28.184948 | orchestrator | 2026-04-06 05:24:28.184961 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-06 05:24:28.184973 | orchestrator | Monday 06 April 2026 05:24:27 +0000 (0:00:01.125) 0:16:56.776 ********** 2026-04-06 05:24:28.184984 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:28.184996 | orchestrator | 2026-04-06 05:24:28.185008 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-06 05:24:28.185019 | orchestrator | Monday 06 April 2026 05:24:27 +0000 (0:00:00.490) 0:16:57.266 ********** 2026-04-06 05:24:28.185036 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:28.185047 | orchestrator | 2026-04-06 05:24:28.185058 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 05:24:28.185069 | orchestrator | Monday 06 April 2026 05:24:27 +0000 (0:00:00.139) 0:16:57.406 ********** 2026-04-06 05:24:28.185080 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:28.185091 | orchestrator | 2026-04-06 05:24:28.185101 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 05:24:28.185120 | orchestrator | Monday 06 April 2026 05:24:28 +0000 (0:00:00.491) 0:16:57.897 ********** 2026-04-06 05:24:42.685999 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:42.686212 | orchestrator | 2026-04-06 05:24:42.686239 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 05:24:42.686253 | orchestrator | Monday 06 April 2026 05:24:28 +0000 (0:00:00.145) 0:16:58.043 ********** 2026-04-06 05:24:42.686280 | orchestrator | skipping: [testbed-node-4] 2026-04-06 
05:24:42.686292 | orchestrator | 2026-04-06 05:24:42.686303 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 05:24:42.686315 | orchestrator | Monday 06 April 2026 05:24:28 +0000 (0:00:00.261) 0:16:58.305 ********** 2026-04-06 05:24:42.686352 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:42.686364 | orchestrator | 2026-04-06 05:24:42.686375 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-06 05:24:42.686387 | orchestrator | Monday 06 April 2026 05:24:28 +0000 (0:00:00.170) 0:16:58.475 ********** 2026-04-06 05:24:42.686398 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-06 05:24:42.686410 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-06 05:24:42.686421 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-06 05:24:42.686432 | orchestrator | 2026-04-06 05:24:42.686443 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-06 05:24:42.686454 | orchestrator | Monday 06 April 2026 05:24:29 +0000 (0:00:01.007) 0:16:59.483 ********** 2026-04-06 05:24:42.686465 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-06 05:24:42.686476 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-06 05:24:42.686487 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-06 05:24:42.686498 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:42.686509 | orchestrator | 2026-04-06 05:24:42.686520 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-06 05:24:42.686531 | orchestrator | Monday 06 April 2026 05:24:29 +0000 (0:00:00.162) 0:16:59.645 ********** 2026-04-06 05:24:42.686543 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-04-06 05:24:42.686557 | 
orchestrator | 2026-04-06 05:24:42.686571 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-06 05:24:42.686585 | orchestrator | Monday 06 April 2026 05:24:30 +0000 (0:00:00.244) 0:16:59.890 ********** 2026-04-06 05:24:42.686598 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:42.686610 | orchestrator | 2026-04-06 05:24:42.686623 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-06 05:24:42.686636 | orchestrator | Monday 06 April 2026 05:24:30 +0000 (0:00:00.130) 0:17:00.020 ********** 2026-04-06 05:24:42.686649 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:42.686662 | orchestrator | 2026-04-06 05:24:42.686675 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-06 05:24:42.686688 | orchestrator | Monday 06 April 2026 05:24:30 +0000 (0:00:00.171) 0:17:00.191 ********** 2026-04-06 05:24:42.686701 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:42.686713 | orchestrator | 2026-04-06 05:24:42.686725 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-06 05:24:42.686738 | orchestrator | Monday 06 April 2026 05:24:30 +0000 (0:00:00.432) 0:17:00.624 ********** 2026-04-06 05:24:42.686776 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:42.686790 | orchestrator | 2026-04-06 05:24:42.686804 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-06 05:24:42.686817 | orchestrator | Monday 06 April 2026 05:24:31 +0000 (0:00:00.241) 0:17:00.865 ********** 2026-04-06 05:24:42.686829 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-06 05:24:42.686842 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-06 05:24:42.686855 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-04-06 05:24:42.686867 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:42.686879 | orchestrator | 2026-04-06 05:24:42.686893 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-06 05:24:42.686905 | orchestrator | Monday 06 April 2026 05:24:31 +0000 (0:00:00.420) 0:17:01.286 ********** 2026-04-06 05:24:42.686917 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-06 05:24:42.686928 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-06 05:24:42.686939 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-06 05:24:42.686950 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:42.686961 | orchestrator | 2026-04-06 05:24:42.686972 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-06 05:24:42.686983 | orchestrator | Monday 06 April 2026 05:24:31 +0000 (0:00:00.410) 0:17:01.696 ********** 2026-04-06 05:24:42.686994 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-06 05:24:42.687005 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-06 05:24:42.687016 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-06 05:24:42.687027 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:42.687038 | orchestrator | 2026-04-06 05:24:42.687049 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-06 05:24:42.687060 | orchestrator | Monday 06 April 2026 05:24:32 +0000 (0:00:00.414) 0:17:02.110 ********** 2026-04-06 05:24:42.687071 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:42.687082 | orchestrator | 2026-04-06 05:24:42.687093 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-06 05:24:42.687104 | orchestrator | Monday 06 April 2026 05:24:32 +0000 
(0:00:00.166) 0:17:02.276 ********** 2026-04-06 05:24:42.687115 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-06 05:24:42.687126 | orchestrator | 2026-04-06 05:24:42.687137 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-06 05:24:42.687148 | orchestrator | Monday 06 April 2026 05:24:32 +0000 (0:00:00.340) 0:17:02.617 ********** 2026-04-06 05:24:42.687195 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:24:42.687208 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:24:42.687219 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:24:42.687230 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-06 05:24:42.687241 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-04-06 05:24:42.687257 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-06 05:24:42.687269 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-06 05:24:42.687279 | orchestrator | 2026-04-06 05:24:42.687290 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-06 05:24:42.687301 | orchestrator | Monday 06 April 2026 05:24:34 +0000 (0:00:01.189) 0:17:03.806 ********** 2026-04-06 05:24:42.687313 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:24:42.687331 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:24:42.687350 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:24:42.687381 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-04-06 05:24:42.687400 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-04-06 05:24:42.687418 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-06 05:24:42.687437 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-06 05:24:42.687455 | orchestrator | 2026-04-06 05:24:42.687474 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-04-06 05:24:42.687494 | orchestrator | Monday 06 April 2026 05:24:35 +0000 (0:00:01.711) 0:17:05.518 ********** 2026-04-06 05:24:42.687506 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:42.687516 | orchestrator | 2026-04-06 05:24:42.687527 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-04-06 05:24:42.687538 | orchestrator | Monday 06 April 2026 05:24:36 +0000 (0:00:00.455) 0:17:05.974 ********** 2026-04-06 05:24:42.687549 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:42.687559 | orchestrator | 2026-04-06 05:24:42.687570 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-04-06 05:24:42.687580 | orchestrator | Monday 06 April 2026 05:24:36 +0000 (0:00:00.146) 0:17:06.120 ********** 2026-04-06 05:24:42.687591 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:42.687602 | orchestrator | 2026-04-06 05:24:42.687613 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-04-06 05:24:42.687623 | orchestrator | Monday 06 April 2026 05:24:36 +0000 (0:00:00.229) 0:17:06.349 ********** 2026-04-06 05:24:42.687634 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-04-06 05:24:42.687645 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-04-06 05:24:42.687655 | orchestrator | 2026-04-06 05:24:42.687666 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-04-06 05:24:42.687677 | orchestrator | Monday 06 April 2026 05:24:40 +0000 (0:00:03.448) 0:17:09.798 ********** 2026-04-06 05:24:42.687687 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-04-06 05:24:42.687698 | orchestrator | 2026-04-06 05:24:42.687709 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-06 05:24:42.687720 | orchestrator | Monday 06 April 2026 05:24:40 +0000 (0:00:00.210) 0:17:10.008 ********** 2026-04-06 05:24:42.687731 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-04-06 05:24:42.687741 | orchestrator | 2026-04-06 05:24:42.687752 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-06 05:24:42.687763 | orchestrator | Monday 06 April 2026 05:24:40 +0000 (0:00:00.234) 0:17:10.242 ********** 2026-04-06 05:24:42.687774 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:42.687784 | orchestrator | 2026-04-06 05:24:42.687795 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-06 05:24:42.687806 | orchestrator | Monday 06 April 2026 05:24:40 +0000 (0:00:00.120) 0:17:10.363 ********** 2026-04-06 05:24:42.687817 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:42.687827 | orchestrator | 2026-04-06 05:24:42.687838 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-06 05:24:42.687849 | orchestrator | Monday 06 April 2026 05:24:41 +0000 (0:00:00.518) 0:17:10.881 ********** 2026-04-06 05:24:42.687859 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:42.687870 | orchestrator | 2026-04-06 05:24:42.687881 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-06 05:24:42.687892 | orchestrator | 
Monday 06 April 2026 05:24:41 +0000 (0:00:00.536) 0:17:11.418 ********** 2026-04-06 05:24:42.687902 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:42.687913 | orchestrator | 2026-04-06 05:24:42.687924 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-06 05:24:42.687934 | orchestrator | Monday 06 April 2026 05:24:42 +0000 (0:00:00.550) 0:17:11.968 ********** 2026-04-06 05:24:42.687952 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:42.687963 | orchestrator | 2026-04-06 05:24:42.687973 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-06 05:24:42.687984 | orchestrator | Monday 06 April 2026 05:24:42 +0000 (0:00:00.146) 0:17:12.115 ********** 2026-04-06 05:24:42.687995 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:42.688006 | orchestrator | 2026-04-06 05:24:42.688016 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-06 05:24:42.688027 | orchestrator | Monday 06 April 2026 05:24:42 +0000 (0:00:00.146) 0:17:12.262 ********** 2026-04-06 05:24:42.688038 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:42.688049 | orchestrator | 2026-04-06 05:24:42.688068 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-06 05:24:54.037281 | orchestrator | Monday 06 April 2026 05:24:42 +0000 (0:00:00.133) 0:17:12.396 ********** 2026-04-06 05:24:54.037425 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:54.037454 | orchestrator | 2026-04-06 05:24:54.037477 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-06 05:24:54.037498 | orchestrator | Monday 06 April 2026 05:24:43 +0000 (0:00:00.541) 0:17:12.937 ********** 2026-04-06 05:24:54.037516 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:54.037527 | orchestrator | 2026-04-06 05:24:54.037555 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-06 05:24:54.037567 | orchestrator | Monday 06 April 2026 05:24:44 +0000 (0:00:00.841) 0:17:13.778 ********** 2026-04-06 05:24:54.037578 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.037590 | orchestrator | 2026-04-06 05:24:54.037601 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-06 05:24:54.037612 | orchestrator | Monday 06 April 2026 05:24:44 +0000 (0:00:00.124) 0:17:13.903 ********** 2026-04-06 05:24:54.037623 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.037634 | orchestrator | 2026-04-06 05:24:54.037646 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-06 05:24:54.037657 | orchestrator | Monday 06 April 2026 05:24:44 +0000 (0:00:00.130) 0:17:14.033 ********** 2026-04-06 05:24:54.037667 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:54.037678 | orchestrator | 2026-04-06 05:24:54.037689 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-06 05:24:54.037700 | orchestrator | Monday 06 April 2026 05:24:44 +0000 (0:00:00.146) 0:17:14.180 ********** 2026-04-06 05:24:54.037711 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:54.037722 | orchestrator | 2026-04-06 05:24:54.037733 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-06 05:24:54.037745 | orchestrator | Monday 06 April 2026 05:24:44 +0000 (0:00:00.156) 0:17:14.336 ********** 2026-04-06 05:24:54.037756 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:54.037767 | orchestrator | 2026-04-06 05:24:54.037780 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-06 05:24:54.037794 | orchestrator | Monday 06 April 2026 05:24:44 +0000 (0:00:00.190) 0:17:14.527 ********** 2026-04-06 05:24:54.037807 | 
orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.037820 | orchestrator | 2026-04-06 05:24:54.037833 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-06 05:24:54.037845 | orchestrator | Monday 06 April 2026 05:24:44 +0000 (0:00:00.128) 0:17:14.655 ********** 2026-04-06 05:24:54.037858 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.037871 | orchestrator | 2026-04-06 05:24:54.037884 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-06 05:24:54.037897 | orchestrator | Monday 06 April 2026 05:24:45 +0000 (0:00:00.150) 0:17:14.806 ********** 2026-04-06 05:24:54.037910 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.037923 | orchestrator | 2026-04-06 05:24:54.037935 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-06 05:24:54.037948 | orchestrator | Monday 06 April 2026 05:24:45 +0000 (0:00:00.138) 0:17:14.944 ********** 2026-04-06 05:24:54.037990 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:54.038010 | orchestrator | 2026-04-06 05:24:54.038110 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-06 05:24:54.038129 | orchestrator | Monday 06 April 2026 05:24:45 +0000 (0:00:00.174) 0:17:15.119 ********** 2026-04-06 05:24:54.038147 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:54.038204 | orchestrator | 2026-04-06 05:24:54.038222 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-06 05:24:54.038240 | orchestrator | Monday 06 April 2026 05:24:45 +0000 (0:00:00.232) 0:17:15.352 ********** 2026-04-06 05:24:54.038259 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.038278 | orchestrator | 2026-04-06 05:24:54.038296 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-06 
05:24:54.038315 | orchestrator | Monday 06 April 2026 05:24:45 +0000 (0:00:00.126) 0:17:15.479 ********** 2026-04-06 05:24:54.038334 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.038353 | orchestrator | 2026-04-06 05:24:54.038371 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-06 05:24:54.038389 | orchestrator | Monday 06 April 2026 05:24:46 +0000 (0:00:00.466) 0:17:15.945 ********** 2026-04-06 05:24:54.038408 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.038427 | orchestrator | 2026-04-06 05:24:54.038445 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-06 05:24:54.038462 | orchestrator | Monday 06 April 2026 05:24:46 +0000 (0:00:00.131) 0:17:16.077 ********** 2026-04-06 05:24:54.038474 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.038485 | orchestrator | 2026-04-06 05:24:54.038496 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-06 05:24:54.038507 | orchestrator | Monday 06 April 2026 05:24:46 +0000 (0:00:00.125) 0:17:16.202 ********** 2026-04-06 05:24:54.038517 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.038528 | orchestrator | 2026-04-06 05:24:54.038539 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-06 05:24:54.038550 | orchestrator | Monday 06 April 2026 05:24:46 +0000 (0:00:00.139) 0:17:16.341 ********** 2026-04-06 05:24:54.038561 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.038572 | orchestrator | 2026-04-06 05:24:54.038582 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-06 05:24:54.038599 | orchestrator | Monday 06 April 2026 05:24:46 +0000 (0:00:00.136) 0:17:16.478 ********** 2026-04-06 05:24:54.038617 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.038635 | 
orchestrator | 2026-04-06 05:24:54.038651 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-06 05:24:54.038668 | orchestrator | Monday 06 April 2026 05:24:46 +0000 (0:00:00.134) 0:17:16.613 ********** 2026-04-06 05:24:54.038685 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.038701 | orchestrator | 2026-04-06 05:24:54.038717 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-06 05:24:54.038735 | orchestrator | Monday 06 April 2026 05:24:47 +0000 (0:00:00.124) 0:17:16.737 ********** 2026-04-06 05:24:54.038780 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.038800 | orchestrator | 2026-04-06 05:24:54.038816 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-06 05:24:54.038832 | orchestrator | Monday 06 April 2026 05:24:47 +0000 (0:00:00.126) 0:17:16.864 ********** 2026-04-06 05:24:54.038849 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.038867 | orchestrator | 2026-04-06 05:24:54.038887 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-06 05:24:54.038917 | orchestrator | Monday 06 April 2026 05:24:47 +0000 (0:00:00.121) 0:17:16.985 ********** 2026-04-06 05:24:54.038935 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.038952 | orchestrator | 2026-04-06 05:24:54.038968 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-06 05:24:54.038988 | orchestrator | Monday 06 April 2026 05:24:47 +0000 (0:00:00.115) 0:17:17.101 ********** 2026-04-06 05:24:54.039024 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.039042 | orchestrator | 2026-04-06 05:24:54.039061 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-06 05:24:54.039077 | orchestrator | Monday 06 April 
2026 05:24:47 +0000 (0:00:00.203) 0:17:17.304 ********** 2026-04-06 05:24:54.039094 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:54.039112 | orchestrator | 2026-04-06 05:24:54.039131 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-06 05:24:54.039149 | orchestrator | Monday 06 April 2026 05:24:48 +0000 (0:00:00.976) 0:17:18.281 ********** 2026-04-06 05:24:54.039167 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:54.039214 | orchestrator | 2026-04-06 05:24:54.039233 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-06 05:24:54.039252 | orchestrator | Monday 06 April 2026 05:24:49 +0000 (0:00:01.242) 0:17:19.524 ********** 2026-04-06 05:24:54.039269 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-04-06 05:24:54.039289 | orchestrator | 2026-04-06 05:24:54.039307 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-06 05:24:54.039325 | orchestrator | Monday 06 April 2026 05:24:50 +0000 (0:00:00.562) 0:17:20.087 ********** 2026-04-06 05:24:54.039345 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.039363 | orchestrator | 2026-04-06 05:24:54.039380 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-06 05:24:54.039392 | orchestrator | Monday 06 April 2026 05:24:50 +0000 (0:00:00.156) 0:17:20.243 ********** 2026-04-06 05:24:54.039402 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.039413 | orchestrator | 2026-04-06 05:24:54.039424 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-06 05:24:54.039442 | orchestrator | Monday 06 April 2026 05:24:50 +0000 (0:00:00.129) 0:17:20.373 ********** 2026-04-06 05:24:54.039460 | orchestrator | ok: [testbed-node-4] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-06 05:24:54.039478 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-06 05:24:54.039495 | orchestrator | 2026-04-06 05:24:54.039511 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-06 05:24:54.039528 | orchestrator | Monday 06 April 2026 05:24:51 +0000 (0:00:00.805) 0:17:21.178 ********** 2026-04-06 05:24:54.039546 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:54.039562 | orchestrator | 2026-04-06 05:24:54.039578 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-06 05:24:54.039594 | orchestrator | Monday 06 April 2026 05:24:51 +0000 (0:00:00.487) 0:17:21.665 ********** 2026-04-06 05:24:54.039613 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.039631 | orchestrator | 2026-04-06 05:24:54.039649 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-06 05:24:54.039669 | orchestrator | Monday 06 April 2026 05:24:52 +0000 (0:00:00.164) 0:17:21.830 ********** 2026-04-06 05:24:54.039688 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.039706 | orchestrator | 2026-04-06 05:24:54.039726 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-06 05:24:54.039744 | orchestrator | Monday 06 April 2026 05:24:52 +0000 (0:00:00.159) 0:17:21.989 ********** 2026-04-06 05:24:54.039763 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.039781 | orchestrator | 2026-04-06 05:24:54.039799 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-06 05:24:54.039818 | orchestrator | Monday 06 April 2026 05:24:52 +0000 (0:00:00.142) 0:17:22.132 ********** 2026-04-06 05:24:54.039837 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-4 2026-04-06 05:24:54.039855 | orchestrator | 2026-04-06 05:24:54.039874 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-06 05:24:54.039893 | orchestrator | Monday 06 April 2026 05:24:52 +0000 (0:00:00.211) 0:17:22.343 ********** 2026-04-06 05:24:54.039928 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:24:54.039946 | orchestrator | 2026-04-06 05:24:54.039964 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-06 05:24:54.039983 | orchestrator | Monday 06 April 2026 05:24:53 +0000 (0:00:00.709) 0:17:23.053 ********** 2026-04-06 05:24:54.040001 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-06 05:24:54.040018 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-06 05:24:54.040035 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-06 05:24:54.040052 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.040071 | orchestrator | 2026-04-06 05:24:54.040089 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-06 05:24:54.040106 | orchestrator | Monday 06 April 2026 05:24:53 +0000 (0:00:00.163) 0:17:23.217 ********** 2026-04-06 05:24:54.040121 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:24:54.040136 | orchestrator | 2026-04-06 05:24:54.040152 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-06 05:24:54.040259 | orchestrator | Monday 06 April 2026 05:24:53 +0000 (0:00:00.434) 0:17:23.651 ********** 2026-04-06 05:24:54.040305 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:10.997044 | orchestrator | 2026-04-06 05:25:10.997178 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-06 05:25:10.997241 | 
orchestrator | Monday 06 April 2026 05:24:54 +0000 (0:00:00.189) 0:17:23.840 ********** 2026-04-06 05:25:10.997254 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:10.997267 | orchestrator | 2026-04-06 05:25:10.997278 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-06 05:25:10.997305 | orchestrator | Monday 06 April 2026 05:24:54 +0000 (0:00:00.155) 0:17:23.995 ********** 2026-04-06 05:25:10.997317 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:10.997328 | orchestrator | 2026-04-06 05:25:10.997339 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-06 05:25:10.997350 | orchestrator | Monday 06 April 2026 05:24:54 +0000 (0:00:00.152) 0:17:24.148 ********** 2026-04-06 05:25:10.997361 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:10.997372 | orchestrator | 2026-04-06 05:25:10.997383 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-06 05:25:10.997395 | orchestrator | Monday 06 April 2026 05:24:54 +0000 (0:00:00.149) 0:17:24.298 ********** 2026-04-06 05:25:10.997406 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:25:10.997418 | orchestrator | 2026-04-06 05:25:10.997430 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-06 05:25:10.997441 | orchestrator | Monday 06 April 2026 05:24:56 +0000 (0:00:01.559) 0:17:25.857 ********** 2026-04-06 05:25:10.997452 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:25:10.997464 | orchestrator | 2026-04-06 05:25:10.997475 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-06 05:25:10.997486 | orchestrator | Monday 06 April 2026 05:24:56 +0000 (0:00:00.135) 0:17:25.993 ********** 2026-04-06 05:25:10.997497 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 
2026-04-06 05:25:10.997508 | orchestrator | 2026-04-06 05:25:10.997519 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-06 05:25:10.997530 | orchestrator | Monday 06 April 2026 05:24:56 +0000 (0:00:00.233) 0:17:26.227 ********** 2026-04-06 05:25:10.997541 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:10.997552 | orchestrator | 2026-04-06 05:25:10.997563 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-06 05:25:10.997575 | orchestrator | Monday 06 April 2026 05:24:56 +0000 (0:00:00.149) 0:17:26.376 ********** 2026-04-06 05:25:10.997588 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:10.997601 | orchestrator | 2026-04-06 05:25:10.997615 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-06 05:25:10.997652 | orchestrator | Monday 06 April 2026 05:24:56 +0000 (0:00:00.157) 0:17:26.533 ********** 2026-04-06 05:25:10.997665 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:10.997678 | orchestrator | 2026-04-06 05:25:10.997692 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-06 05:25:10.997704 | orchestrator | Monday 06 April 2026 05:24:56 +0000 (0:00:00.173) 0:17:26.707 ********** 2026-04-06 05:25:10.997717 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:10.997729 | orchestrator | 2026-04-06 05:25:10.997742 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-06 05:25:10.997755 | orchestrator | Monday 06 April 2026 05:24:57 +0000 (0:00:00.146) 0:17:26.854 ********** 2026-04-06 05:25:10.997768 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:10.997781 | orchestrator | 2026-04-06 05:25:10.997793 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-06 05:25:10.997807 | orchestrator | 
Monday 06 April 2026 05:24:57 +0000 (0:00:00.440) 0:17:27.294 ********** 2026-04-06 05:25:10.997820 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:10.997832 | orchestrator | 2026-04-06 05:25:10.997845 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-06 05:25:10.997858 | orchestrator | Monday 06 April 2026 05:24:57 +0000 (0:00:00.171) 0:17:27.465 ********** 2026-04-06 05:25:10.997870 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:10.997884 | orchestrator | 2026-04-06 05:25:10.997896 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-06 05:25:10.997909 | orchestrator | Monday 06 April 2026 05:24:57 +0000 (0:00:00.207) 0:17:27.673 ********** 2026-04-06 05:25:10.997921 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:10.997934 | orchestrator | 2026-04-06 05:25:10.997947 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-06 05:25:10.997958 | orchestrator | Monday 06 April 2026 05:24:58 +0000 (0:00:00.154) 0:17:27.827 ********** 2026-04-06 05:25:10.997969 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:25:10.997980 | orchestrator | 2026-04-06 05:25:10.997991 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-06 05:25:10.998002 | orchestrator | Monday 06 April 2026 05:24:58 +0000 (0:00:00.270) 0:17:28.098 ********** 2026-04-06 05:25:10.998068 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-04-06 05:25:10.998083 | orchestrator | 2026-04-06 05:25:10.998094 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-06 05:25:10.998106 | orchestrator | Monday 06 April 2026 05:24:58 +0000 (0:00:00.218) 0:17:28.316 ********** 2026-04-06 05:25:10.998117 | orchestrator | ok: [testbed-node-4] => 
(item=/etc/ceph) 2026-04-06 05:25:10.998128 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-06 05:25:10.998139 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-06 05:25:10.998150 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-06 05:25:10.998161 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-06 05:25:10.998171 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-06 05:25:10.998214 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-06 05:25:10.998226 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-06 05:25:10.998237 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-06 05:25:10.998266 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-06 05:25:10.998277 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-06 05:25:10.998288 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-06 05:25:10.998299 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-06 05:25:10.998310 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-06 05:25:10.998327 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-04-06 05:25:10.998347 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-04-06 05:25:10.998358 | orchestrator | 2026-04-06 05:25:10.998369 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-06 05:25:10.998380 | orchestrator | Monday 06 April 2026 05:25:04 +0000 (0:00:05.535) 0:17:33.852 ********** 2026-04-06 05:25:10.998391 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-04-06 05:25:10.998402 | orchestrator | 2026-04-06 05:25:10.998413 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-04-06 05:25:10.998424 | orchestrator | Monday 06 April 2026 05:25:04 +0000 (0:00:00.223) 0:17:34.075 ********** 2026-04-06 05:25:10.998435 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-06 05:25:10.998448 | orchestrator | 2026-04-06 05:25:10.998459 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-06 05:25:10.998469 | orchestrator | Monday 06 April 2026 05:25:04 +0000 (0:00:00.541) 0:17:34.616 ********** 2026-04-06 05:25:10.998481 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-06 05:25:10.998492 | orchestrator | 2026-04-06 05:25:10.998503 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-06 05:25:10.998514 | orchestrator | Monday 06 April 2026 05:25:05 +0000 (0:00:00.963) 0:17:35.580 ********** 2026-04-06 05:25:10.998525 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:10.998536 | orchestrator | 2026-04-06 05:25:10.998547 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-06 05:25:10.998558 | orchestrator | Monday 06 April 2026 05:25:06 +0000 (0:00:00.434) 0:17:36.014 ********** 2026-04-06 05:25:10.998569 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:10.998580 | orchestrator | 2026-04-06 05:25:10.998590 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-06 05:25:10.998601 | orchestrator | Monday 06 April 2026 05:25:06 +0000 (0:00:00.124) 0:17:36.138 ********** 2026-04-06 05:25:10.998612 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:10.998623 | orchestrator | 2026-04-06 05:25:10.998634 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-04-06 05:25:10.998645 | orchestrator | Monday 06 April 2026 05:25:06 +0000 (0:00:00.123) 0:17:36.262 ********** 2026-04-06 05:25:10.998656 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:10.998667 | orchestrator | 2026-04-06 05:25:10.998677 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-06 05:25:10.998688 | orchestrator | Monday 06 April 2026 05:25:06 +0000 (0:00:00.125) 0:17:36.387 ********** 2026-04-06 05:25:10.998699 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:10.998710 | orchestrator | 2026-04-06 05:25:10.998721 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-06 05:25:10.998732 | orchestrator | Monday 06 April 2026 05:25:06 +0000 (0:00:00.122) 0:17:36.510 ********** 2026-04-06 05:25:10.998743 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:10.998754 | orchestrator | 2026-04-06 05:25:10.998765 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-06 05:25:10.998776 | orchestrator | Monday 06 April 2026 05:25:06 +0000 (0:00:00.107) 0:17:36.617 ********** 2026-04-06 05:25:10.998787 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:10.998798 | orchestrator | 2026-04-06 05:25:10.998809 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-06 05:25:10.998820 | orchestrator | Monday 06 April 2026 05:25:07 +0000 (0:00:00.121) 0:17:36.738 ********** 2026-04-06 05:25:10.998831 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:10.998842 | orchestrator | 2026-04-06 05:25:10.998852 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-06 05:25:10.998870 | orchestrator | Monday 06 
April 2026 05:25:07 +0000 (0:00:00.128) 0:17:36.867 ********** 2026-04-06 05:25:10.998881 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:10.998892 | orchestrator | 2026-04-06 05:25:10.998903 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-06 05:25:10.998914 | orchestrator | Monday 06 April 2026 05:25:07 +0000 (0:00:00.124) 0:17:36.991 ********** 2026-04-06 05:25:10.998925 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:10.998936 | orchestrator | 2026-04-06 05:25:10.998947 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-06 05:25:10.998957 | orchestrator | Monday 06 April 2026 05:25:07 +0000 (0:00:00.124) 0:17:37.116 ********** 2026-04-06 05:25:10.998968 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:25:10.998979 | orchestrator | 2026-04-06 05:25:10.998990 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-06 05:25:10.999001 | orchestrator | Monday 06 April 2026 05:25:07 +0000 (0:00:00.198) 0:17:37.315 ********** 2026-04-06 05:25:10.999012 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-04-06 05:25:10.999023 | orchestrator | 2026-04-06 05:25:10.999034 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-06 05:25:10.999045 | orchestrator | Monday 06 April 2026 05:25:10 +0000 (0:00:03.292) 0:17:40.608 ********** 2026-04-06 05:25:10.999062 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-06 05:25:31.684849 | orchestrator | 2026-04-06 05:25:31.684952 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-06 05:25:31.684963 | orchestrator | Monday 06 April 2026 05:25:11 +0000 (0:00:00.188) 0:17:40.796 ********** 2026-04-06 05:25:31.684985 | 
orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-04-06 05:25:31.685004 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-04-06 05:25:31.685066 | orchestrator | 2026-04-06 05:25:31.685074 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-06 05:25:31.685081 | orchestrator | Monday 06 April 2026 05:25:18 +0000 (0:00:07.278) 0:17:48.075 ********** 2026-04-06 05:25:31.685088 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:31.685096 | orchestrator | 2026-04-06 05:25:31.685102 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-06 05:25:31.685108 | orchestrator | Monday 06 April 2026 05:25:18 +0000 (0:00:00.146) 0:17:48.221 ********** 2026-04-06 05:25:31.685115 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:31.685121 | orchestrator | 2026-04-06 05:25:31.685128 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-06 05:25:31.685136 | orchestrator | Monday 06 April 2026 05:25:18 +0000 (0:00:00.141) 0:17:48.363 ********** 2026-04-06 05:25:31.685142 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:31.685148 | orchestrator | 2026-04-06 05:25:31.685155 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-04-06 05:25:31.685161 | orchestrator | Monday 06 April 2026 05:25:18 +0000 (0:00:00.170) 0:17:48.533 ********** 2026-04-06 05:25:31.685167 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:31.685173 | orchestrator | 2026-04-06 05:25:31.685180 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-06 05:25:31.685186 | orchestrator | Monday 06 April 2026 05:25:18 +0000 (0:00:00.157) 0:17:48.690 ********** 2026-04-06 05:25:31.685231 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:31.685239 | orchestrator | 2026-04-06 05:25:31.685245 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-06 05:25:31.685251 | orchestrator | Monday 06 April 2026 05:25:19 +0000 (0:00:00.157) 0:17:48.847 ********** 2026-04-06 05:25:31.685257 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:25:31.685264 | orchestrator | 2026-04-06 05:25:31.685271 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-06 05:25:31.685277 | orchestrator | Monday 06 April 2026 05:25:19 +0000 (0:00:00.245) 0:17:49.093 ********** 2026-04-06 05:25:31.685283 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-06 05:25:31.685290 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-06 05:25:31.685296 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-06 05:25:31.685302 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:31.685308 | orchestrator | 2026-04-06 05:25:31.685315 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-06 05:25:31.685321 | orchestrator | Monday 06 April 2026 05:25:19 +0000 (0:00:00.441) 0:17:49.535 ********** 2026-04-06 05:25:31.685327 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-06 05:25:31.685333 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-06 05:25:31.685339 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-06 05:25:31.685345 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:31.685352 | orchestrator | 2026-04-06 05:25:31.685358 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-06 05:25:31.685364 | orchestrator | Monday 06 April 2026 05:25:20 +0000 (0:00:00.467) 0:17:50.003 ********** 2026-04-06 05:25:31.685370 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-06 05:25:31.685376 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-06 05:25:31.685382 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-06 05:25:31.685389 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:31.685395 | orchestrator | 2026-04-06 05:25:31.685401 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-06 05:25:31.685407 | orchestrator | Monday 06 April 2026 05:25:20 +0000 (0:00:00.452) 0:17:50.456 ********** 2026-04-06 05:25:31.685415 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:25:31.685422 | orchestrator | 2026-04-06 05:25:31.685429 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-06 05:25:31.685437 | orchestrator | Monday 06 April 2026 05:25:20 +0000 (0:00:00.183) 0:17:50.639 ********** 2026-04-06 05:25:31.685444 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-06 05:25:31.685452 | orchestrator | 2026-04-06 05:25:31.685459 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-06 05:25:31.685467 | orchestrator | Monday 06 April 2026 05:25:21 +0000 (0:00:00.426) 0:17:51.066 ********** 2026-04-06 05:25:31.685474 | orchestrator | changed: [testbed-node-4] 2026-04-06 05:25:31.685481 | orchestrator | 
2026-04-06 05:25:31.685489 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-04-06 05:25:31.685497 | orchestrator | Monday 06 April 2026 05:25:22 +0000 (0:00:01.131) 0:17:52.198 ********** 2026-04-06 05:25:31.685504 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:25:31.685511 | orchestrator | 2026-04-06 05:25:31.685531 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-06 05:25:31.685539 | orchestrator | Monday 06 April 2026 05:25:22 +0000 (0:00:00.145) 0:17:52.344 ********** 2026-04-06 05:25:31.685546 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:25:31.685554 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:25:31.685565 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:25:31.685578 | orchestrator | 2026-04-06 05:25:31.685585 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-04-06 05:25:31.685592 | orchestrator | Monday 06 April 2026 05:25:23 +0000 (0:00:00.731) 0:17:53.075 ********** 2026-04-06 05:25:31.685600 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4 2026-04-06 05:25:31.685607 | orchestrator | 2026-04-06 05:25:31.685614 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-06 05:25:31.685621 | orchestrator | Monday 06 April 2026 05:25:23 +0000 (0:00:00.218) 0:17:53.293 ********** 2026-04-06 05:25:31.685628 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:31.685636 | orchestrator | 2026-04-06 05:25:31.685643 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-06 05:25:31.685650 | orchestrator | Monday 06 April 2026 05:25:23 +0000 (0:00:00.124) 
0:17:53.418 ********** 2026-04-06 05:25:31.685657 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:31.685664 | orchestrator | 2026-04-06 05:25:31.685671 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-06 05:25:31.685678 | orchestrator | Monday 06 April 2026 05:25:23 +0000 (0:00:00.128) 0:17:53.546 ********** 2026-04-06 05:25:31.685686 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:25:31.685693 | orchestrator | 2026-04-06 05:25:31.685700 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-06 05:25:31.685707 | orchestrator | Monday 06 April 2026 05:25:24 +0000 (0:00:00.476) 0:17:54.022 ********** 2026-04-06 05:25:31.685715 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:25:31.685722 | orchestrator | 2026-04-06 05:25:31.685730 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-06 05:25:31.685739 | orchestrator | Monday 06 April 2026 05:25:24 +0000 (0:00:00.169) 0:17:54.192 ********** 2026-04-06 05:25:31.685747 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-06 05:25:31.685754 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-06 05:25:31.685762 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-06 05:25:31.685769 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-06 05:25:31.685777 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-06 05:25:31.685793 | orchestrator | 2026-04-06 05:25:31.685807 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-04-06 05:25:31.685815 | orchestrator | Monday 06 April 2026 05:25:26 +0000 (0:00:01.807) 0:17:55.999 ********** 2026-04-06 
05:25:31.685823 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:31.685830 | orchestrator | 2026-04-06 05:25:31.685837 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-06 05:25:31.685843 | orchestrator | Monday 06 April 2026 05:25:26 +0000 (0:00:00.123) 0:17:56.123 ********** 2026-04-06 05:25:31.685849 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4 2026-04-06 05:25:31.685855 | orchestrator | 2026-04-06 05:25:31.685861 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-06 05:25:31.685868 | orchestrator | Monday 06 April 2026 05:25:26 +0000 (0:00:00.223) 0:17:56.346 ********** 2026-04-06 05:25:31.685874 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-06 05:25:31.685880 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-04-06 05:25:31.685886 | orchestrator | 2026-04-06 05:25:31.685892 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-06 05:25:31.685898 | orchestrator | Monday 06 April 2026 05:25:27 +0000 (0:00:01.103) 0:17:57.449 ********** 2026-04-06 05:25:31.685905 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 05:25:31.685911 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-06 05:25:31.685917 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-06 05:25:31.685927 | orchestrator | 2026-04-06 05:25:31.685934 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-06 05:25:31.685940 | orchestrator | Monday 06 April 2026 05:25:30 +0000 (0:00:02.420) 0:17:59.870 ********** 2026-04-06 05:25:31.685946 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-06 05:25:31.685952 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-06 
05:25:31.685958 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:25:31.685965 | orchestrator | 2026-04-06 05:25:31.685971 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-06 05:25:31.685977 | orchestrator | Monday 06 April 2026 05:25:31 +0000 (0:00:01.017) 0:18:00.888 ********** 2026-04-06 05:25:31.685983 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:31.685989 | orchestrator | 2026-04-06 05:25:31.685995 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-06 05:25:31.686001 | orchestrator | Monday 06 April 2026 05:25:31 +0000 (0:00:00.227) 0:18:01.115 ********** 2026-04-06 05:25:31.686008 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:31.686014 | orchestrator | 2026-04-06 05:25:31.686054 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-06 05:25:31.686060 | orchestrator | Monday 06 April 2026 05:25:31 +0000 (0:00:00.133) 0:18:01.248 ********** 2026-04-06 05:25:31.686066 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:25:31.686073 | orchestrator | 2026-04-06 05:25:31.686083 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-06 05:26:10.238833 | orchestrator | Monday 06 April 2026 05:25:31 +0000 (0:00:00.145) 0:18:01.393 ********** 2026-04-06 05:26:10.238949 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4 2026-04-06 05:26:10.238971 | orchestrator | 2026-04-06 05:26:10.238993 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-06 05:26:10.239031 | orchestrator | Monday 06 April 2026 05:25:31 +0000 (0:00:00.201) 0:18:01.595 ********** 2026-04-06 05:26:10.239051 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:26:10.239072 | orchestrator | 2026-04-06 05:26:10.239091 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-04-06 05:26:10.239111 | orchestrator | Monday 06 April 2026 05:25:32 +0000 (0:00:00.452) 0:18:02.048 ********** 2026-04-06 05:26:10.239130 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:26:10.239149 | orchestrator | 2026-04-06 05:26:10.239164 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-06 05:26:10.239175 | orchestrator | Monday 06 April 2026 05:25:34 +0000 (0:00:02.356) 0:18:04.404 ********** 2026-04-06 05:26:10.239186 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4 2026-04-06 05:26:10.239197 | orchestrator | 2026-04-06 05:26:10.239208 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-04-06 05:26:10.239249 | orchestrator | Monday 06 April 2026 05:25:34 +0000 (0:00:00.196) 0:18:04.601 ********** 2026-04-06 05:26:10.239262 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:26:10.239273 | orchestrator | 2026-04-06 05:26:10.239284 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-06 05:26:10.239295 | orchestrator | Monday 06 April 2026 05:25:35 +0000 (0:00:00.960) 0:18:05.561 ********** 2026-04-06 05:26:10.239306 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:26:10.239317 | orchestrator | 2026-04-06 05:26:10.239328 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-06 05:26:10.239339 | orchestrator | Monday 06 April 2026 05:25:37 +0000 (0:00:01.238) 0:18:06.799 ********** 2026-04-06 05:26:10.239352 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:26:10.239365 | orchestrator | 2026-04-06 05:26:10.239378 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-06 05:26:10.239390 | orchestrator | Monday 06 April 2026 05:25:38 +0000 (0:00:01.179) 0:18:07.979 ********** 2026-04-06 
05:26:10.239402 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:26:10.239416 | orchestrator | 2026-04-06 05:26:10.239429 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-04-06 05:26:10.239481 | orchestrator | Monday 06 April 2026 05:25:38 +0000 (0:00:00.137) 0:18:08.116 ********** 2026-04-06 05:26:10.239503 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:26:10.239522 | orchestrator | 2026-04-06 05:26:10.239541 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-06 05:26:10.239562 | orchestrator | Monday 06 April 2026 05:25:38 +0000 (0:00:00.153) 0:18:08.270 ********** 2026-04-06 05:26:10.239580 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-06 05:26:10.239598 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-04-06 05:26:10.239611 | orchestrator | 2026-04-06 05:26:10.239623 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-06 05:26:10.239636 | orchestrator | Monday 06 April 2026 05:25:39 +0000 (0:00:00.834) 0:18:09.105 ********** 2026-04-06 05:26:10.239649 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-06 05:26:10.239661 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-04-06 05:26:10.239674 | orchestrator | 2026-04-06 05:26:10.239687 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-04-06 05:26:10.239700 | orchestrator | Monday 06 April 2026 05:25:41 +0000 (0:00:01.862) 0:18:10.968 ********** 2026-04-06 05:26:10.239711 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-04-06 05:26:10.239722 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-04-06 05:26:10.239733 | orchestrator | 2026-04-06 05:26:10.239744 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-06 05:26:10.239755 | orchestrator | Monday 06 April 2026 05:25:44 +0000 (0:00:03.622) 
0:18:14.590 ********** 2026-04-06 05:26:10.239766 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:26:10.239777 | orchestrator | 2026-04-06 05:26:10.239788 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-06 05:26:10.239799 | orchestrator | Monday 06 April 2026 05:25:45 +0000 (0:00:00.241) 0:18:14.832 ********** 2026-04-06 05:26:10.239810 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:26:10.239821 | orchestrator | 2026-04-06 05:26:10.239832 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-06 05:26:10.239843 | orchestrator | Monday 06 April 2026 05:25:45 +0000 (0:00:00.265) 0:18:15.098 ********** 2026-04-06 05:26:10.239854 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:26:10.239864 | orchestrator | 2026-04-06 05:26:10.239875 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-04-06 05:26:10.239886 | orchestrator | Monday 06 April 2026 05:25:45 +0000 (0:00:00.281) 0:18:15.380 ********** 2026-04-06 05:26:10.239897 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:26:10.239908 | orchestrator | 2026-04-06 05:26:10.239919 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-04-06 05:26:10.239930 | orchestrator | Monday 06 April 2026 05:25:45 +0000 (0:00:00.120) 0:18:15.501 ********** 2026-04-06 05:26:10.239941 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:26:10.239952 | orchestrator | 2026-04-06 05:26:10.239963 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-04-06 05:26:10.239974 | orchestrator | Monday 06 April 2026 05:25:45 +0000 (0:00:00.114) 0:18:15.615 ********** 2026-04-06 05:26:10.239985 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-04-06 05:26:10.239997 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-04-06 05:26:10.240008 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-04-06 05:26:10.240039 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-04-06 05:26:10.240051 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (596 retries left). 2026-04-06 05:26:10.240070 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (595 retries left). 2026-04-06 05:26:10.240091 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:26:10.240102 | orchestrator | 2026-04-06 05:26:10.240114 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-06 05:26:10.240133 | orchestrator | Monday 06 April 2026 05:26:05 +0000 (0:00:19.847) 0:18:35.463 ********** 2026-04-06 05:26:10.240152 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:26:10.240169 | orchestrator | 2026-04-06 05:26:10.240187 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-06 05:26:10.240205 | orchestrator | Monday 06 April 2026 05:26:05 +0000 (0:00:00.133) 0:18:35.596 ********** 2026-04-06 05:26:10.240250 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:26:10.240269 | orchestrator | 2026-04-06 05:26:10.240289 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-06 05:26:10.240307 | orchestrator | Monday 06 April 2026 05:26:06 +0000 (0:00:00.139) 0:18:35.736 ********** 2026-04-06 05:26:10.240325 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:26:10.240344 | orchestrator | 2026-04-06 05:26:10.240363 | orchestrator | RUNNING HANDLER 
[ceph-handler : Mdss handler] ********************************** 2026-04-06 05:26:10.240381 | orchestrator | Monday 06 April 2026 05:26:06 +0000 (0:00:00.141) 0:18:35.878 ********** 2026-04-06 05:26:10.240401 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:26:10.240419 | orchestrator | 2026-04-06 05:26:10.240438 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-06 05:26:10.240449 | orchestrator | Monday 06 April 2026 05:26:06 +0000 (0:00:00.131) 0:18:36.010 ********** 2026-04-06 05:26:10.240460 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:26:10.240471 | orchestrator | 2026-04-06 05:26:10.240482 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-04-06 05:26:10.240492 | orchestrator | Monday 06 April 2026 05:26:06 +0000 (0:00:00.154) 0:18:36.164 ********** 2026-04-06 05:26:10.240503 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:26:10.240514 | orchestrator | 2026-04-06 05:26:10.240525 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-06 05:26:10.240536 | orchestrator | Monday 06 April 2026 05:26:06 +0000 (0:00:00.141) 0:18:36.306 ********** 2026-04-06 05:26:10.240546 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:26:10.240557 | orchestrator | 2026-04-06 05:26:10.240575 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-04-06 05:26:10.240592 | orchestrator | 2026-04-06 05:26:10.240610 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-06 05:26:10.240628 | orchestrator | Monday 06 April 2026 05:26:07 +0000 (0:00:00.642) 0:18:36.948 ********** 2026-04-06 05:26:10.240644 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-04-06 05:26:10.240660 | orchestrator | 2026-04-06 05:26:10.240677 | orchestrator | TASK [ceph-facts 
: Check if it is atomic host] ********************************* 2026-04-06 05:26:10.240693 | orchestrator | Monday 06 April 2026 05:26:07 +0000 (0:00:00.264) 0:18:37.213 ********** 2026-04-06 05:26:10.240711 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:10.240730 | orchestrator | 2026-04-06 05:26:10.240749 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-06 05:26:10.240767 | orchestrator | Monday 06 April 2026 05:26:07 +0000 (0:00:00.426) 0:18:37.639 ********** 2026-04-06 05:26:10.240784 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:10.240803 | orchestrator | 2026-04-06 05:26:10.240821 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-06 05:26:10.240839 | orchestrator | Monday 06 April 2026 05:26:08 +0000 (0:00:00.483) 0:18:38.123 ********** 2026-04-06 05:26:10.240856 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:10.240874 | orchestrator | 2026-04-06 05:26:10.240892 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-06 05:26:10.240910 | orchestrator | Monday 06 April 2026 05:26:08 +0000 (0:00:00.464) 0:18:38.587 ********** 2026-04-06 05:26:10.240927 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:10.240961 | orchestrator | 2026-04-06 05:26:10.240980 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-06 05:26:10.240998 | orchestrator | Monday 06 April 2026 05:26:09 +0000 (0:00:00.132) 0:18:38.720 ********** 2026-04-06 05:26:10.241016 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:10.241034 | orchestrator | 2026-04-06 05:26:10.241052 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-06 05:26:10.241070 | orchestrator | Monday 06 April 2026 05:26:09 +0000 (0:00:00.159) 0:18:38.880 ********** 2026-04-06 05:26:10.241090 | orchestrator | ok: 
[testbed-node-5] 2026-04-06 05:26:10.241108 | orchestrator | 2026-04-06 05:26:10.241127 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-06 05:26:10.241146 | orchestrator | Monday 06 April 2026 05:26:09 +0000 (0:00:00.164) 0:18:39.045 ********** 2026-04-06 05:26:10.241164 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:10.241179 | orchestrator | 2026-04-06 05:26:10.241190 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-06 05:26:10.241201 | orchestrator | Monday 06 April 2026 05:26:09 +0000 (0:00:00.150) 0:18:39.195 ********** 2026-04-06 05:26:10.241212 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:10.241248 | orchestrator | 2026-04-06 05:26:10.241259 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-06 05:26:10.241270 | orchestrator | Monday 06 April 2026 05:26:09 +0000 (0:00:00.145) 0:18:39.341 ********** 2026-04-06 05:26:10.241281 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:26:10.241292 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:26:10.241315 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:26:18.074279 | orchestrator | 2026-04-06 05:26:18.074386 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-06 05:26:18.074400 | orchestrator | Monday 06 April 2026 05:26:10 +0000 (0:00:00.704) 0:18:40.046 ********** 2026-04-06 05:26:18.074409 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:18.074421 | orchestrator | 2026-04-06 05:26:18.074450 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-06 05:26:18.074463 | orchestrator | Monday 06 April 2026 05:26:10 +0000 (0:00:00.246) 
0:18:40.292 ********** 2026-04-06 05:26:18.074476 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:26:18.074488 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:26:18.074499 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:26:18.074511 | orchestrator | 2026-04-06 05:26:18.074522 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-06 05:26:18.074534 | orchestrator | Monday 06 April 2026 05:26:12 +0000 (0:00:02.151) 0:18:42.443 ********** 2026-04-06 05:26:18.074547 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-06 05:26:18.074560 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-06 05:26:18.074573 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-06 05:26:18.074586 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:18.074601 | orchestrator | 2026-04-06 05:26:18.074613 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-06 05:26:18.074626 | orchestrator | Monday 06 April 2026 05:26:13 +0000 (0:00:00.475) 0:18:42.919 ********** 2026-04-06 05:26:18.074642 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-06 05:26:18.074655 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-06 05:26:18.074685 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-06 05:26:18.074694 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:18.074702 | orchestrator | 2026-04-06 05:26:18.074710 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-06 05:26:18.074718 | orchestrator | Monday 06 April 2026 05:26:14 +0000 (0:00:00.950) 0:18:43.870 ********** 2026-04-06 05:26:18.074728 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:26:18.074739 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:26:18.074748 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:26:18.074756 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:18.074764 | orchestrator | 
2026-04-06 05:26:18.074772 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-06 05:26:18.074780 | orchestrator | Monday 06 April 2026 05:26:14 +0000 (0:00:00.164) 0:18:44.034 ********** 2026-04-06 05:26:18.074805 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '06ed7bf51830', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-06 05:26:11.100882', 'end': '2026-04-06 05:26:11.149126', 'delta': '0:00:00.048244', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['06ed7bf51830'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-06 05:26:18.074834 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '6879ce368bbc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-06 05:26:11.691057', 'end': '2026-04-06 05:26:11.740805', 'delta': '0:00:00.049748', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6879ce368bbc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-06 05:26:18.074844 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'a00606ebddc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', 
'--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-06 05:26:12.535721', 'end': '2026-04-06 05:26:12.582165', 'delta': '0:00:00.046444', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a00606ebddc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-06 05:26:18.074862 | orchestrator | 2026-04-06 05:26:18.074872 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-06 05:26:18.074881 | orchestrator | Monday 06 April 2026 05:26:14 +0000 (0:00:00.231) 0:18:44.266 ********** 2026-04-06 05:26:18.074891 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:18.074901 | orchestrator | 2026-04-06 05:26:18.074910 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-06 05:26:18.074920 | orchestrator | Monday 06 April 2026 05:26:15 +0000 (0:00:00.988) 0:18:45.254 ********** 2026-04-06 05:26:18.074929 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:18.074939 | orchestrator | 2026-04-06 05:26:18.074948 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-06 05:26:18.074957 | orchestrator | Monday 06 April 2026 05:26:15 +0000 (0:00:00.252) 0:18:45.506 ********** 2026-04-06 05:26:18.074966 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:18.074976 | orchestrator | 2026-04-06 05:26:18.074985 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-06 05:26:18.074994 | orchestrator | Monday 06 April 2026 05:26:15 +0000 (0:00:00.153) 0:18:45.659 ********** 2026-04-06 05:26:18.075004 | 
orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:26:18.075014 | orchestrator | 2026-04-06 05:26:18.075023 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-06 05:26:18.075032 | orchestrator | Monday 06 April 2026 05:26:16 +0000 (0:00:01.005) 0:18:46.665 ********** 2026-04-06 05:26:18.075042 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:18.075051 | orchestrator | 2026-04-06 05:26:18.075060 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-06 05:26:18.075069 | orchestrator | Monday 06 April 2026 05:26:17 +0000 (0:00:00.155) 0:18:46.820 ********** 2026-04-06 05:26:18.075079 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:18.075088 | orchestrator | 2026-04-06 05:26:18.075098 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-06 05:26:18.075107 | orchestrator | Monday 06 April 2026 05:26:17 +0000 (0:00:00.129) 0:18:46.950 ********** 2026-04-06 05:26:18.075116 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:18.075125 | orchestrator | 2026-04-06 05:26:18.075134 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-06 05:26:18.075143 | orchestrator | Monday 06 April 2026 05:26:17 +0000 (0:00:00.269) 0:18:47.219 ********** 2026-04-06 05:26:18.075153 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:18.075163 | orchestrator | 2026-04-06 05:26:18.075173 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-06 05:26:18.075181 | orchestrator | Monday 06 April 2026 05:26:17 +0000 (0:00:00.132) 0:18:47.352 ********** 2026-04-06 05:26:18.075189 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:18.075197 | orchestrator | 2026-04-06 05:26:18.075204 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved 
symlinks] ************** 2026-04-06 05:26:18.075212 | orchestrator | Monday 06 April 2026 05:26:17 +0000 (0:00:00.117) 0:18:47.470 ********** 2026-04-06 05:26:18.075252 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:18.075263 | orchestrator | 2026-04-06 05:26:18.075271 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-06 05:26:18.075279 | orchestrator | Monday 06 April 2026 05:26:17 +0000 (0:00:00.177) 0:18:47.647 ********** 2026-04-06 05:26:18.075287 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:18.075300 | orchestrator | 2026-04-06 05:26:18.075308 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-06 05:26:18.075322 | orchestrator | Monday 06 April 2026 05:26:18 +0000 (0:00:00.141) 0:18:47.788 ********** 2026-04-06 05:26:19.062807 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:19.062914 | orchestrator | 2026-04-06 05:26:19.062933 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-06 05:26:19.062949 | orchestrator | Monday 06 April 2026 05:26:18 +0000 (0:00:00.160) 0:18:47.949 ********** 2026-04-06 05:26:19.062980 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:19.062995 | orchestrator | 2026-04-06 05:26:19.063010 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-06 05:26:19.063024 | orchestrator | Monday 06 April 2026 05:26:18 +0000 (0:00:00.126) 0:18:48.076 ********** 2026-04-06 05:26:19.063038 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:19.063052 | orchestrator | 2026-04-06 05:26:19.063066 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-06 05:26:19.063080 | orchestrator | Monday 06 April 2026 05:26:18 +0000 (0:00:00.489) 0:18:48.565 ********** 2026-04-06 05:26:19.063097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:26:19.063116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d79f264--f564--5244--b3d4--1e30cd615742-osd--block--4d79f264--f564--5244--b3d4--1e30cd615742', 'dm-uuid-LVM-Z6Gfl68NWHSIaTDLndMKbJ9g2vXxLKS7H7IVDVpTPXM3dDz207hlZrQACS13BMNP'], 'uuids': ['22ded8c8-9142-404c-a572-856e0a8f4fba'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c3f554c9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['H7IVDV-pTPX-M3dD-z207-hlZr-QACS-13BMNP']}})  2026-04-06 05:26:19.063134 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c', 'scsi-SQEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd180ec14', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-06 05:26:19.063150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-lROe02-FRbV-W78v-Dfl5-E5Bd-fAVM-rPPzrC', 'scsi-0QEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d', 'scsi-SQEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '43e26771', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--fcd584d6--c8ff--5eaf--81cc--26105cfb5447-osd--block--fcd584d6--c8ff--5eaf--81cc--26105cfb5447']}})  2026-04-06 05:26:19.063166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:26:19.063206 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:26:19.063282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-40-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-06 05:26:19.063301 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:26:19.063316 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt', 'dm-uuid-CRYPT-LUKS2-0cb92a9095ac4932ba9885def0a3f871-WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-06 05:26:19.063331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:26:19.063348 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fcd584d6--c8ff--5eaf--81cc--26105cfb5447-osd--block--fcd584d6--c8ff--5eaf--81cc--26105cfb5447', 'dm-uuid-LVM-DDg0C3XoaiYrOzMcB0kfPfqzHg8E5JhRWG4AoOycNeM5Q2WICfjMBHF0YX2mqeJt'], 'uuids': ['0cb92a90-95ac-4932-ba98-85def0a3f871'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '43e26771', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt']}})  2026-04-06 05:26:19.063364 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5lLdRw-7tLp-t2wE-raTC-2xO3-NEEr-mCIRos', 'scsi-0QEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0', 'scsi-SQEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c3f554c9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4d79f264--f564--5244--b3d4--1e30cd615742-osd--block--4d79f264--f564--5244--b3d4--1e30cd615742']}})  2026-04-06 05:26:19.063390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:26:19.063431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd99642af', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-06 05:26:19.376616 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:26:19.376718 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:26:19.376735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-H7IVDV-pTPX-M3dD-z207-hlZr-QACS-13BMNP', 'dm-uuid-CRYPT-LUKS2-22ded8c89142404ca572856e0a8f4fba-H7IVDV-pTPX-M3dD-z207-hlZr-QACS-13BMNP'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-06 05:26:19.376779 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:19.376793 | orchestrator | 2026-04-06 05:26:19.376805 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-06 05:26:19.376817 | orchestrator | Monday 06 April 2026 05:26:19 +0000 (0:00:00.342) 0:18:48.908 ********** 2026-04-06 05:26:19.376829 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:26:19.376859 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d79f264--f564--5244--b3d4--1e30cd615742-osd--block--4d79f264--f564--5244--b3d4--1e30cd615742', 'dm-uuid-LVM-Z6Gfl68NWHSIaTDLndMKbJ9g2vXxLKS7H7IVDVpTPXM3dDz207hlZrQACS13BMNP'], 'uuids': ['22ded8c8-9142-404c-a572-856e0a8f4fba'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c3f554c9', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['H7IVDV-pTPX-M3dD-z207-hlZr-QACS-13BMNP']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:26:19.376873 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c', 'scsi-SQEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd180ec14', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:26:19.376904 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-lROe02-FRbV-W78v-Dfl5-E5Bd-fAVM-rPPzrC', 'scsi-0QEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d', 'scsi-SQEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '43e26771', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--fcd584d6--c8ff--5eaf--81cc--26105cfb5447-osd--block--fcd584d6--c8ff--5eaf--81cc--26105cfb5447']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:26:19.376937 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:26:19.376950 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:26:19.376967 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:26:19.376979 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:26:19.376999 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt', 'dm-uuid-CRYPT-LUKS2-0cb92a9095ac4932ba9885def0a3f871-WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:26:20.661837 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:26:20.661999 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fcd584d6--c8ff--5eaf--81cc--26105cfb5447-osd--block--fcd584d6--c8ff--5eaf--81cc--26105cfb5447', 'dm-uuid-LVM-DDg0C3XoaiYrOzMcB0kfPfqzHg8E5JhRWG4AoOycNeM5Q2WICfjMBHF0YX2mqeJt'], 'uuids': ['0cb92a90-95ac-4932-ba98-85def0a3f871'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '43e26771', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:26:20.662121 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5lLdRw-7tLp-t2wE-raTC-2xO3-NEEr-mCIRos', 'scsi-0QEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0', 'scsi-SQEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c3f554c9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--4d79f264--f564--5244--b3d4--1e30cd615742-osd--block--4d79f264--f564--5244--b3d4--1e30cd615742']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:26:20.662150 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:26:20.662199 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd99642af', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:26:20.662267 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:26:20.662300 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:26:20.662316 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-H7IVDV-pTPX-M3dD-z207-hlZr-QACS-13BMNP', 'dm-uuid-CRYPT-LUKS2-22ded8c89142404ca572856e0a8f4fba-H7IVDV-pTPX-M3dD-z207-hlZr-QACS-13BMNP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:26:20.662329 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:20.662343 | orchestrator | 2026-04-06 05:26:20.662356 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-06 05:26:20.662369 | orchestrator | Monday 06 April 2026 05:26:19 +0000 (0:00:00.380) 0:18:49.289 ********** 2026-04-06 05:26:20.662382 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:20.662395 | orchestrator | 2026-04-06 05:26:20.662409 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-06 05:26:20.662422 | orchestrator | Monday 06 April 2026 05:26:20 +0000 (0:00:00.506) 0:18:49.795 ********** 2026-04-06 05:26:20.662435 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:20.662447 | orchestrator | 2026-04-06 05:26:20.662460 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 05:26:20.662473 | orchestrator | Monday 06 April 2026 05:26:20 +0000 (0:00:00.128) 0:18:49.924 ********** 2026-04-06 05:26:20.662486 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:20.662498 | orchestrator | 2026-04-06 05:26:20.662511 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 05:26:20.662541 | orchestrator | Monday 06 April 2026 05:26:20 +0000 (0:00:00.451) 0:18:50.376 ********** 2026-04-06 05:26:35.189560 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:35.189675 | orchestrator | 2026-04-06 05:26:35.189692 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 05:26:35.189705 | orchestrator | Monday 06 April 2026 05:26:20 +0000 (0:00:00.132) 0:18:50.508 ********** 2026-04-06 05:26:35.189716 | orchestrator | skipping: [testbed-node-5] 2026-04-06 
05:26:35.189727 | orchestrator | 2026-04-06 05:26:35.189739 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 05:26:35.189750 | orchestrator | Monday 06 April 2026 05:26:21 +0000 (0:00:00.232) 0:18:50.741 ********** 2026-04-06 05:26:35.189761 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:35.189772 | orchestrator | 2026-04-06 05:26:35.189783 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-06 05:26:35.189794 | orchestrator | Monday 06 April 2026 05:26:21 +0000 (0:00:00.145) 0:18:50.886 ********** 2026-04-06 05:26:35.189806 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-06 05:26:35.189817 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-06 05:26:35.189828 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-06 05:26:35.189839 | orchestrator | 2026-04-06 05:26:35.189849 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-06 05:26:35.189860 | orchestrator | Monday 06 April 2026 05:26:22 +0000 (0:00:00.997) 0:18:51.884 ********** 2026-04-06 05:26:35.189872 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-06 05:26:35.189883 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-06 05:26:35.189893 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-06 05:26:35.189904 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:35.189915 | orchestrator | 2026-04-06 05:26:35.189926 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-06 05:26:35.189937 | orchestrator | Monday 06 April 2026 05:26:22 +0000 (0:00:00.163) 0:18:52.047 ********** 2026-04-06 05:26:35.189947 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-04-06 05:26:35.189959 | 
orchestrator | 2026-04-06 05:26:35.189970 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-06 05:26:35.189982 | orchestrator | Monday 06 April 2026 05:26:22 +0000 (0:00:00.213) 0:18:52.261 ********** 2026-04-06 05:26:35.189993 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:35.190004 | orchestrator | 2026-04-06 05:26:35.190075 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-06 05:26:35.190092 | orchestrator | Monday 06 April 2026 05:26:23 +0000 (0:00:00.494) 0:18:52.755 ********** 2026-04-06 05:26:35.190105 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:35.190119 | orchestrator | 2026-04-06 05:26:35.190132 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-06 05:26:35.190144 | orchestrator | Monday 06 April 2026 05:26:23 +0000 (0:00:00.160) 0:18:52.915 ********** 2026-04-06 05:26:35.190157 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:35.190170 | orchestrator | 2026-04-06 05:26:35.190183 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-06 05:26:35.190206 | orchestrator | Monday 06 April 2026 05:26:23 +0000 (0:00:00.149) 0:18:53.065 ********** 2026-04-06 05:26:35.190275 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:35.190290 | orchestrator | 2026-04-06 05:26:35.190302 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-06 05:26:35.190315 | orchestrator | Monday 06 April 2026 05:26:23 +0000 (0:00:00.243) 0:18:53.308 ********** 2026-04-06 05:26:35.190328 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-06 05:26:35.190341 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-06 05:26:35.190354 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-04-06 05:26:35.190391 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:35.190405 | orchestrator | 2026-04-06 05:26:35.190418 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-06 05:26:35.190431 | orchestrator | Monday 06 April 2026 05:26:23 +0000 (0:00:00.402) 0:18:53.711 ********** 2026-04-06 05:26:35.190442 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-06 05:26:35.190453 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-06 05:26:35.190464 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-06 05:26:35.190474 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:35.190485 | orchestrator | 2026-04-06 05:26:35.190496 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-06 05:26:35.190506 | orchestrator | Monday 06 April 2026 05:26:24 +0000 (0:00:00.452) 0:18:54.163 ********** 2026-04-06 05:26:35.190517 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-06 05:26:35.190528 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-06 05:26:35.190539 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-06 05:26:35.190549 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:35.190560 | orchestrator | 2026-04-06 05:26:35.190575 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-06 05:26:35.190592 | orchestrator | Monday 06 April 2026 05:26:24 +0000 (0:00:00.399) 0:18:54.563 ********** 2026-04-06 05:26:35.190610 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:35.190629 | orchestrator | 2026-04-06 05:26:35.190730 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-06 05:26:35.190747 | orchestrator | Monday 06 April 2026 05:26:25 +0000 
(0:00:00.181) 0:18:54.745 ********** 2026-04-06 05:26:35.190758 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-06 05:26:35.190769 | orchestrator | 2026-04-06 05:26:35.190781 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-06 05:26:35.190792 | orchestrator | Monday 06 April 2026 05:26:25 +0000 (0:00:00.367) 0:18:55.112 ********** 2026-04-06 05:26:35.190824 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:26:35.190836 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:26:35.190846 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:26:35.190857 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-06 05:26:35.190868 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-06 05:26:35.190878 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-04-06 05:26:35.190889 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-06 05:26:35.190900 | orchestrator | 2026-04-06 05:26:35.190911 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-06 05:26:35.190922 | orchestrator | Monday 06 April 2026 05:26:26 +0000 (0:00:01.137) 0:18:56.250 ********** 2026-04-06 05:26:35.190932 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:26:35.190943 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:26:35.190953 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:26:35.190964 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-04-06 05:26:35.190975 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-06 05:26:35.190985 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-04-06 05:26:35.190996 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-06 05:26:35.191006 | orchestrator | 2026-04-06 05:26:35.191028 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-04-06 05:26:35.191039 | orchestrator | Monday 06 April 2026 05:26:28 +0000 (0:00:01.716) 0:18:57.966 ********** 2026-04-06 05:26:35.191050 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:35.191060 | orchestrator | 2026-04-06 05:26:35.191071 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-04-06 05:26:35.191082 | orchestrator | Monday 06 April 2026 05:26:28 +0000 (0:00:00.467) 0:18:58.433 ********** 2026-04-06 05:26:35.191092 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:35.191103 | orchestrator | 2026-04-06 05:26:35.191114 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-04-06 05:26:35.191124 | orchestrator | Monday 06 April 2026 05:26:29 +0000 (0:00:00.456) 0:18:58.890 ********** 2026-04-06 05:26:35.191135 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:35.191146 | orchestrator | 2026-04-06 05:26:35.191156 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-04-06 05:26:35.191167 | orchestrator | Monday 06 April 2026 05:26:29 +0000 (0:00:00.257) 0:18:59.147 ********** 2026-04-06 05:26:35.191177 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-06 05:26:35.191188 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-04-06 05:26:35.191199 | orchestrator | 2026-04-06 05:26:35.191210 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-04-06 05:26:35.191228 | orchestrator | Monday 06 April 2026 05:26:32 +0000 (0:00:03.045) 0:19:02.193 ********** 2026-04-06 05:26:35.191275 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-04-06 05:26:35.191287 | orchestrator | 2026-04-06 05:26:35.191298 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-06 05:26:35.191309 | orchestrator | Monday 06 April 2026 05:26:32 +0000 (0:00:00.200) 0:19:02.393 ********** 2026-04-06 05:26:35.191320 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-04-06 05:26:35.191331 | orchestrator | 2026-04-06 05:26:35.191341 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-06 05:26:35.191352 | orchestrator | Monday 06 April 2026 05:26:32 +0000 (0:00:00.208) 0:19:02.601 ********** 2026-04-06 05:26:35.191363 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:35.191374 | orchestrator | 2026-04-06 05:26:35.191385 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-06 05:26:35.191396 | orchestrator | Monday 06 April 2026 05:26:32 +0000 (0:00:00.113) 0:19:02.715 ********** 2026-04-06 05:26:35.191407 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:35.191417 | orchestrator | 2026-04-06 05:26:35.191428 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-06 05:26:35.191439 | orchestrator | Monday 06 April 2026 05:26:33 +0000 (0:00:00.487) 0:19:03.202 ********** 2026-04-06 05:26:35.191450 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:35.191460 | orchestrator | 2026-04-06 05:26:35.191471 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-06 05:26:35.191482 | orchestrator | 
Monday 06 April 2026 05:26:33 +0000 (0:00:00.500) 0:19:03.702 ********** 2026-04-06 05:26:35.191493 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:35.191504 | orchestrator | 2026-04-06 05:26:35.191515 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-06 05:26:35.191525 | orchestrator | Monday 06 April 2026 05:26:34 +0000 (0:00:00.518) 0:19:04.220 ********** 2026-04-06 05:26:35.191536 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:35.191547 | orchestrator | 2026-04-06 05:26:35.191558 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-06 05:26:35.191568 | orchestrator | Monday 06 April 2026 05:26:34 +0000 (0:00:00.128) 0:19:04.349 ********** 2026-04-06 05:26:35.191579 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:35.191590 | orchestrator | 2026-04-06 05:26:35.191601 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-06 05:26:35.191619 | orchestrator | Monday 06 April 2026 05:26:34 +0000 (0:00:00.134) 0:19:04.483 ********** 2026-04-06 05:26:35.191630 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:26:35.191641 | orchestrator | 2026-04-06 05:26:35.191659 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-06 05:26:46.096480 | orchestrator | Monday 06 April 2026 05:26:35 +0000 (0:00:00.412) 0:19:04.896 ********** 2026-04-06 05:26:46.096631 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:46.096658 | orchestrator | 2026-04-06 05:26:46.096677 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-06 05:26:46.096699 | orchestrator | Monday 06 April 2026 05:26:35 +0000 (0:00:00.546) 0:19:05.442 ********** 2026-04-06 05:26:46.096718 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:26:46.096736 | orchestrator | 2026-04-06 05:26:46.096754 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-06 05:26:46.096771 | orchestrator | Monday 06 April 2026 05:26:36 +0000 (0:00:00.525) 0:19:05.968 **********
2026-04-06 05:26:46.096789 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.096807 | orchestrator |
2026-04-06 05:26:46.096824 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-06 05:26:46.096842 | orchestrator | Monday 06 April 2026 05:26:36 +0000 (0:00:00.130) 0:19:06.099 **********
2026-04-06 05:26:46.096859 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.096876 | orchestrator |
2026-04-06 05:26:46.096893 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-06 05:26:46.096911 | orchestrator | Monday 06 April 2026 05:26:36 +0000 (0:00:00.145) 0:19:06.244 **********
2026-04-06 05:26:46.096928 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:26:46.096946 | orchestrator |
2026-04-06 05:26:46.096963 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-06 05:26:46.096982 | orchestrator | Monday 06 April 2026 05:26:36 +0000 (0:00:00.148) 0:19:06.393 **********
2026-04-06 05:26:46.097003 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:26:46.097023 | orchestrator |
2026-04-06 05:26:46.097042 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-06 05:26:46.097061 | orchestrator | Monday 06 April 2026 05:26:36 +0000 (0:00:00.165) 0:19:06.558 **********
2026-04-06 05:26:46.097079 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:26:46.097098 | orchestrator |
2026-04-06 05:26:46.097116 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-06 05:26:46.097136 | orchestrator | Monday 06 April 2026 05:26:37 +0000 (0:00:00.163) 0:19:06.722 **********
2026-04-06 05:26:46.097154 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.097173 | orchestrator |
2026-04-06 05:26:46.097192 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-06 05:26:46.097211 | orchestrator | Monday 06 April 2026 05:26:37 +0000 (0:00:00.132) 0:19:06.855 **********
2026-04-06 05:26:46.097230 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.097278 | orchestrator |
2026-04-06 05:26:46.097298 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-06 05:26:46.097319 | orchestrator | Monday 06 April 2026 05:26:37 +0000 (0:00:00.141) 0:19:06.996 **********
2026-04-06 05:26:46.097339 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.097356 | orchestrator |
2026-04-06 05:26:46.097373 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-06 05:26:46.097391 | orchestrator | Monday 06 April 2026 05:26:37 +0000 (0:00:00.148) 0:19:07.145 **********
2026-04-06 05:26:46.097408 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:26:46.097426 | orchestrator |
2026-04-06 05:26:46.097443 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-06 05:26:46.097482 | orchestrator | Monday 06 April 2026 05:26:37 +0000 (0:00:00.151) 0:19:07.296 **********
2026-04-06 05:26:46.097500 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:26:46.097517 | orchestrator |
2026-04-06 05:26:46.097534 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-06 05:26:46.097552 | orchestrator | Monday 06 April 2026 05:26:38 +0000 (0:00:00.529) 0:19:07.825 **********
2026-04-06 05:26:46.097598 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.097617 | orchestrator |
2026-04-06 05:26:46.097635 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-06 05:26:46.097652 | orchestrator | Monday 06 April 2026 05:26:38 +0000 (0:00:00.134) 0:19:07.960 **********
2026-04-06 05:26:46.097669 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.097687 | orchestrator |
2026-04-06 05:26:46.097705 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-06 05:26:46.097849 | orchestrator | Monday 06 April 2026 05:26:38 +0000 (0:00:00.131) 0:19:08.091 **********
2026-04-06 05:26:46.097872 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.097890 | orchestrator |
2026-04-06 05:26:46.097908 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-06 05:26:46.097925 | orchestrator | Monday 06 April 2026 05:26:38 +0000 (0:00:00.118) 0:19:08.210 **********
2026-04-06 05:26:46.097942 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.097959 | orchestrator |
2026-04-06 05:26:46.097977 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-06 05:26:46.097994 | orchestrator | Monday 06 April 2026 05:26:38 +0000 (0:00:00.134) 0:19:08.345 **********
2026-04-06 05:26:46.098011 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.098107 | orchestrator |
2026-04-06 05:26:46.098127 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-06 05:26:46.098145 | orchestrator | Monday 06 April 2026 05:26:38 +0000 (0:00:00.136) 0:19:08.481 **********
2026-04-06 05:26:46.098163 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.098180 | orchestrator |
2026-04-06 05:26:46.098196 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-06 05:26:46.098213 | orchestrator | Monday 06 April 2026 05:26:38 +0000 (0:00:00.130) 0:19:08.612 **********
2026-04-06 05:26:46.098229 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.098279 | orchestrator |
2026-04-06 05:26:46.098297 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-06 05:26:46.098316 | orchestrator | Monday 06 April 2026 05:26:39 +0000 (0:00:00.124) 0:19:08.736 **********
2026-04-06 05:26:46.098333 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.098350 | orchestrator |
2026-04-06 05:26:46.098368 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-06 05:26:46.098386 | orchestrator | Monday 06 April 2026 05:26:39 +0000 (0:00:00.138) 0:19:08.875 **********
2026-04-06 05:26:46.098435 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.098453 | orchestrator |
2026-04-06 05:26:46.098471 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-06 05:26:46.098488 | orchestrator | Monday 06 April 2026 05:26:39 +0000 (0:00:00.134) 0:19:09.009 **********
2026-04-06 05:26:46.098506 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.098525 | orchestrator |
2026-04-06 05:26:46.098541 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-06 05:26:46.098552 | orchestrator | Monday 06 April 2026 05:26:39 +0000 (0:00:00.131) 0:19:09.140 **********
2026-04-06 05:26:46.098564 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.098575 | orchestrator |
2026-04-06 05:26:46.098586 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-06 05:26:46.098597 | orchestrator | Monday 06 April 2026 05:26:39 +0000 (0:00:00.134) 0:19:09.275 **********
2026-04-06 05:26:46.098608 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.098619 | orchestrator |
2026-04-06 05:26:46.098630 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-06 05:26:46.098641 | orchestrator | Monday 06 April 2026 05:26:39 +0000 (0:00:00.194) 0:19:09.470 **********
2026-04-06 05:26:46.098652 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:26:46.098664 | orchestrator |
2026-04-06 05:26:46.098675 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-06 05:26:46.098702 | orchestrator | Monday 06 April 2026 05:26:40 +0000 (0:00:01.232) 0:19:10.702 **********
2026-04-06 05:26:46.098713 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:26:46.098724 | orchestrator |
2026-04-06 05:26:46.098735 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-06 05:26:46.098746 | orchestrator | Monday 06 April 2026 05:26:42 +0000 (0:00:01.229) 0:19:11.932 **********
2026-04-06 05:26:46.098757 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-04-06 05:26:46.098770 | orchestrator |
2026-04-06 05:26:46.098781 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-06 05:26:46.098792 | orchestrator | Monday 06 April 2026 05:26:42 +0000 (0:00:00.223) 0:19:12.155 **********
2026-04-06 05:26:46.098803 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.098814 | orchestrator |
2026-04-06 05:26:46.098825 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-06 05:26:46.098836 | orchestrator | Monday 06 April 2026 05:26:42 +0000 (0:00:00.151) 0:19:12.306 **********
2026-04-06 05:26:46.098846 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.098857 | orchestrator |
2026-04-06 05:26:46.098868 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-06 05:26:46.098879 | orchestrator | Monday 06 April 2026 05:26:42 +0000 (0:00:00.137) 0:19:12.444 **********
2026-04-06 05:26:46.098890 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-06 05:26:46.098902 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-06 05:26:46.098913 | orchestrator |
2026-04-06 05:26:46.098924 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-06 05:26:46.098935 | orchestrator | Monday 06 April 2026 05:26:43 +0000 (0:00:00.798) 0:19:13.242 **********
2026-04-06 05:26:46.098946 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:26:46.098957 | orchestrator |
2026-04-06 05:26:46.098978 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-06 05:26:46.098989 | orchestrator | Monday 06 April 2026 05:26:44 +0000 (0:00:00.481) 0:19:13.724 **********
2026-04-06 05:26:46.099000 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.099011 | orchestrator |
2026-04-06 05:26:46.099022 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-06 05:26:46.099033 | orchestrator | Monday 06 April 2026 05:26:44 +0000 (0:00:00.148) 0:19:13.872 **********
2026-04-06 05:26:46.099044 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.099055 | orchestrator |
2026-04-06 05:26:46.099066 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-06 05:26:46.099077 | orchestrator | Monday 06 April 2026 05:26:44 +0000 (0:00:00.152) 0:19:14.025 **********
2026-04-06 05:26:46.099088 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.099099 | orchestrator |
2026-04-06 05:26:46.099110 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-06 05:26:46.099121 | orchestrator | Monday 06 April 2026 05:26:44 +0000 (0:00:00.128) 0:19:14.153 **********
2026-04-06 05:26:46.099131 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-04-06 05:26:46.099142 | orchestrator |
2026-04-06 05:26:46.099153 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-06 05:26:46.099164 | orchestrator | Monday 06 April 2026 05:26:44 +0000 (0:00:00.212) 0:19:14.366 **********
2026-04-06 05:26:46.099175 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:26:46.099186 | orchestrator |
2026-04-06 05:26:46.099197 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-06 05:26:46.099209 | orchestrator | Monday 06 April 2026 05:26:45 +0000 (0:00:01.023) 0:19:15.389 **********
2026-04-06 05:26:46.099220 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-06 05:26:46.099230 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-06 05:26:46.099302 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-06 05:26:46.099314 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.099325 | orchestrator |
2026-04-06 05:26:46.099335 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-06 05:26:46.099346 | orchestrator | Monday 06 April 2026 05:26:45 +0000 (0:00:00.177) 0:19:15.567 **********
2026-04-06 05:26:46.099357 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:26:46.099368 | orchestrator |
2026-04-06 05:26:46.099378 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-06 05:26:46.099389 | orchestrator | Monday 06 April 2026 05:26:46 +0000 (0:00:00.161) 0:19:15.728 **********
2026-04-06 05:26:46.099409 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:03.665132 | orchestrator |
2026-04-06 05:27:03.665284 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-06 05:27:03.665309 | orchestrator | Monday 06 April 2026 05:26:46 +0000 (0:00:00.171) 0:19:15.900 **********
2026-04-06 05:27:03.665326 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:03.665344 | orchestrator |
2026-04-06 05:27:03.665360 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-06 05:27:03.665375 | orchestrator | Monday 06 April 2026 05:26:46 +0000 (0:00:00.150) 0:19:16.050 **********
2026-04-06 05:27:03.665391 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:03.665407 | orchestrator |
2026-04-06 05:27:03.665423 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-06 05:27:03.665439 | orchestrator | Monday 06 April 2026 05:26:46 +0000 (0:00:00.148) 0:19:16.199 **********
2026-04-06 05:27:03.665456 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:03.665472 | orchestrator |
2026-04-06 05:27:03.665488 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-06 05:27:03.665504 | orchestrator | Monday 06 April 2026 05:26:46 +0000 (0:00:00.165) 0:19:16.365 **********
2026-04-06 05:27:03.665522 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:27:03.665540 | orchestrator |
2026-04-06 05:27:03.665556 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-06 05:27:03.665573 | orchestrator | Monday 06 April 2026 05:26:48 +0000 (0:00:01.481) 0:19:17.847 **********
2026-04-06 05:27:03.665590 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:27:03.665608 | orchestrator |
2026-04-06 05:27:03.665626 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-06 05:27:03.665643 | orchestrator | Monday 06 April 2026 05:26:48 +0000 (0:00:00.143) 0:19:17.990 **********
2026-04-06 05:27:03.665660 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-04-06 05:27:03.665679 | orchestrator |
2026-04-06 05:27:03.665695 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-06 05:27:03.665711 | orchestrator | Monday 06 April 2026 05:26:48 +0000 (0:00:00.210) 0:19:18.200 **********
2026-04-06 05:27:03.665730 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:03.665748 | orchestrator |
2026-04-06 05:27:03.665766 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-06 05:27:03.665783 | orchestrator | Monday 06 April 2026 05:26:48 +0000 (0:00:00.172) 0:19:18.373 **********
2026-04-06 05:27:03.665799 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:03.665816 | orchestrator |
2026-04-06 05:27:03.665833 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-06 05:27:03.665850 | orchestrator | Monday 06 April 2026 05:26:48 +0000 (0:00:00.151) 0:19:18.524 **********
2026-04-06 05:27:03.665867 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:03.665884 | orchestrator |
2026-04-06 05:27:03.665903 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-06 05:27:03.665918 | orchestrator | Monday 06 April 2026 05:26:49 +0000 (0:00:00.456) 0:19:18.982 **********
2026-04-06 05:27:03.665933 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:03.665950 | orchestrator |
2026-04-06 05:27:03.665968 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-06 05:27:03.666108 | orchestrator | Monday 06 April 2026 05:26:49 +0000 (0:00:00.152) 0:19:19.134 **********
2026-04-06 05:27:03.666130 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:03.666146 | orchestrator |
2026-04-06 05:27:03.666162 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-06 05:27:03.666178 | orchestrator | Monday 06 April 2026 05:26:49 +0000 (0:00:00.145) 0:19:19.280 **********
2026-04-06 05:27:03.666195 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:03.666211 | orchestrator |
2026-04-06 05:27:03.666227 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-06 05:27:03.666244 | orchestrator | Monday 06 April 2026 05:26:49 +0000 (0:00:00.150) 0:19:19.431 **********
2026-04-06 05:27:03.666283 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:03.666300 | orchestrator |
2026-04-06 05:27:03.666315 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-06 05:27:03.666331 | orchestrator | Monday 06 April 2026 05:26:49 +0000 (0:00:00.150) 0:19:19.581 **********
2026-04-06 05:27:03.666347 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:03.666365 | orchestrator |
2026-04-06 05:27:03.666382 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-06 05:27:03.666399 | orchestrator | Monday 06 April 2026 05:26:50 +0000 (0:00:00.144) 0:19:19.726 **********
2026-04-06 05:27:03.666416 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:27:03.666432 | orchestrator |
2026-04-06 05:27:03.666448 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-06 05:27:03.666464 | orchestrator | Monday 06 April 2026 05:26:50 +0000 (0:00:00.248) 0:19:19.975 **********
2026-04-06 05:27:03.666479 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-04-06 05:27:03.666497 | orchestrator |
2026-04-06 05:27:03.666514 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-06 05:27:03.666531 | orchestrator | Monday 06 April 2026 05:26:50 +0000 (0:00:00.198) 0:19:20.174 **********
2026-04-06 05:27:03.666548 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-04-06 05:27:03.666564 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-06 05:27:03.666580 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-06 05:27:03.666596 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-06 05:27:03.666612 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-06 05:27:03.666627 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-06 05:27:03.666643 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-06 05:27:03.666659 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-06 05:27:03.666676 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-06 05:27:03.666719 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-06 05:27:03.666737 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-06 05:27:03.666754 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-06 05:27:03.666770 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-06 05:27:03.666785 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-06 05:27:03.666801 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-04-06 05:27:03.666816 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-04-06 05:27:03.666831 | orchestrator |
2026-04-06 05:27:03.666846 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-06 05:27:03.666862 | orchestrator | Monday 06 April 2026 05:26:55 +0000 (0:00:05.428) 0:19:25.602 **********
2026-04-06 05:27:03.666877 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-04-06 05:27:03.666892 | orchestrator |
2026-04-06 05:27:03.666906 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-06 05:27:03.666934 | orchestrator | Monday 06 April 2026 05:26:56 +0000 (0:00:00.185) 0:19:25.787 **********
2026-04-06 05:27:03.666949 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-06 05:27:03.666966 | orchestrator |
2026-04-06 05:27:03.666981 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-06 05:27:03.666996 | orchestrator | Monday 06 April 2026 05:26:56 +0000 (0:00:00.815) 0:19:26.603 **********
2026-04-06 05:27:03.667012 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-06 05:27:03.667028 | orchestrator |
2026-04-06 05:27:03.667044 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-06 05:27:03.667059 | orchestrator | Monday 06 April 2026 05:26:57 +0000 (0:00:00.950) 0:19:27.553 **********
2026-04-06 05:27:03.667074 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:03.667089 | orchestrator |
2026-04-06 05:27:03.667104 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-06 05:27:03.667119 | orchestrator | Monday 06 April 2026 05:26:57 +0000 (0:00:00.137) 0:19:27.690 **********
2026-04-06 05:27:03.667134 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:03.667149 | orchestrator |
2026-04-06 05:27:03.667163 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-06 05:27:03.667180 | orchestrator | Monday 06 April 2026 05:26:58 +0000 (0:00:00.152) 0:19:27.843 **********
2026-04-06 05:27:03.667197 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:03.667213 | orchestrator |
2026-04-06 05:27:03.667230 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-06 05:27:03.667247 | orchestrator | Monday 06 April 2026 05:26:58 +0000 (0:00:00.147) 0:19:27.991 **********
2026-04-06 05:27:03.667303 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:03.667320 | orchestrator |
2026-04-06 05:27:03.667337 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-06 05:27:03.667363 | orchestrator | Monday 06 April 2026 05:26:58 +0000 (0:00:00.141) 0:19:28.132 **********
2026-04-06 05:27:03.667378 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:03.667393 | orchestrator |
2026-04-06 05:27:03.667408 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-06 05:27:03.667425 | orchestrator | Monday 06 April 2026 05:26:58 +0000 (0:00:00.178) 0:19:28.311 **********
2026-04-06 05:27:03.667442 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:03.667458 | orchestrator |
2026-04-06 05:27:03.667475 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-06 05:27:03.667492 | orchestrator | Monday 06 April 2026 05:26:58 +0000 (0:00:00.123) 0:19:28.434 **********
2026-04-06 05:27:03.667509 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:03.667524 | orchestrator |
2026-04-06 05:27:03.667540 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-06 05:27:03.667556 | orchestrator | Monday 06 April 2026 05:26:58 +0000 (0:00:00.138) 0:19:28.573 **********
2026-04-06 05:27:03.667571 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:03.667587 | orchestrator |
2026-04-06 05:27:03.667603 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-06 05:27:03.667617 | orchestrator | Monday 06 April 2026 05:26:58 +0000 (0:00:00.136) 0:19:28.709 **********
2026-04-06 05:27:03.667631 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:03.667646 | orchestrator |
2026-04-06 05:27:03.667661 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-06 05:27:03.667676 | orchestrator | Monday 06 April 2026 05:26:59 +0000 (0:00:00.137) 0:19:28.847 **********
2026-04-06 05:27:03.667690 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:03.667706 | orchestrator |
2026-04-06 05:27:03.667722 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-06 05:27:03.667751 | orchestrator | Monday 06 April 2026 05:26:59 +0000 (0:00:00.136) 0:19:28.983 **********
2026-04-06 05:27:03.667768 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:27:03.667784 | orchestrator |
2026-04-06 05:27:03.667799 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-06 05:27:03.667815 | orchestrator | Monday 06 April 2026 05:26:59 +0000 (0:00:00.192) 0:19:29.175 **********
2026-04-06 05:27:03.667830 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-04-06 05:27:03.667846 | orchestrator |
2026-04-06 05:27:03.667862 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-06 05:27:03.667878 | orchestrator | Monday 06 April 2026 05:27:03 +0000 (0:00:03.711) 0:19:32.886 **********
2026-04-06 05:27:03.667907 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-06 05:27:24.070817 | orchestrator |
2026-04-06 05:27:24.070925 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-06 05:27:24.070941 | orchestrator | Monday 06 April 2026 05:27:04 +0000 (0:00:00.940) 0:19:33.826 **********
2026-04-06 05:27:24.070954 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-04-06 05:27:24.070969 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-04-06 05:27:24.070980 | orchestrator |
2026-04-06 05:27:24.070991 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-06 05:27:24.071001 | orchestrator | Monday 06 April 2026 05:27:10 +0000 (0:00:06.552) 0:19:40.379 **********
2026-04-06 05:27:24.071011 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:24.071022 | orchestrator |
2026-04-06 05:27:24.071032 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-06 05:27:24.071041 | orchestrator | Monday 06 April 2026 05:27:10 +0000 (0:00:00.163) 0:19:40.542 **********
2026-04-06 05:27:24.071051 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:24.071060 | orchestrator |
2026-04-06 05:27:24.071070 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-06 05:27:24.071082 | orchestrator | Monday 06 April 2026 05:27:10 +0000 (0:00:00.125) 0:19:40.668 **********
2026-04-06 05:27:24.071092 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:24.071101 | orchestrator |
2026-04-06 05:27:24.071111 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-06 05:27:24.071121 | orchestrator | Monday 06 April 2026 05:27:11 +0000 (0:00:00.171) 0:19:40.840 **********
2026-04-06 05:27:24.071130 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:24.071140 | orchestrator |
2026-04-06 05:27:24.071150 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-06 05:27:24.071160 | orchestrator | Monday 06 April 2026 05:27:11 +0000 (0:00:00.156) 0:19:40.997 **********
2026-04-06 05:27:24.071169 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:24.071179 | orchestrator |
2026-04-06 05:27:24.071189 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-06 05:27:24.071198 | orchestrator | Monday 06 April 2026 05:27:11 +0000 (0:00:00.154) 0:19:41.151 **********
2026-04-06 05:27:24.071208 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:27:24.071219 | orchestrator |
2026-04-06 05:27:24.071243 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-06 05:27:24.071299 | orchestrator | Monday 06 April 2026 05:27:11 +0000 (0:00:00.250) 0:19:41.401 **********
2026-04-06 05:27:24.071310 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-06 05:27:24.071319 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-06 05:27:24.071329 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-06 05:27:24.071339 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:24.071349 | orchestrator |
2026-04-06 05:27:24.071359 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-06 05:27:24.071370 | orchestrator | Monday 06 April 2026 05:27:12 +0000 (0:00:00.414) 0:19:41.816 **********
2026-04-06 05:27:24.071381 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-06 05:27:24.071393 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-06 05:27:24.071404 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-06 05:27:24.071415 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:24.071427 | orchestrator |
2026-04-06 05:27:24.071438 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-06 05:27:24.071450 | orchestrator | Monday 06 April 2026 05:27:12 +0000 (0:00:00.455) 0:19:42.271 **********
2026-04-06 05:27:24.071461 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-06 05:27:24.071474 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-06 05:27:24.071485 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-06 05:27:24.071497 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:24.071508 | orchestrator |
2026-04-06 05:27:24.071520 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-06 05:27:24.071531 | orchestrator | Monday 06 April 2026 05:27:12 +0000 (0:00:00.399) 0:19:42.671 **********
2026-04-06 05:27:24.071543 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:27:24.071555 | orchestrator |
2026-04-06 05:27:24.071566 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-06 05:27:24.071578 | orchestrator | Monday 06 April 2026 05:27:13 +0000 (0:00:00.181) 0:19:42.852 **********
2026-04-06 05:27:24.071590 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-06 05:27:24.071602 | orchestrator |
2026-04-06 05:27:24.071613 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-06 05:27:24.071624 | orchestrator | Monday 06 April 2026 05:27:14 +0000 (0:00:01.115) 0:19:43.968 **********
2026-04-06 05:27:24.071636 | orchestrator | changed: [testbed-node-5]
2026-04-06 05:27:24.071647 | orchestrator |
2026-04-06 05:27:24.071659 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-04-06 05:27:24.071670 | orchestrator | Monday 06 April 2026 05:27:15 +0000 (0:00:00.800) 0:19:44.769 **********
2026-04-06 05:27:24.071682 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:27:24.071694 | orchestrator |
2026-04-06 05:27:24.071722 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-04-06 05:27:24.071732 | orchestrator | Monday 06 April 2026 05:27:15 +0000 (0:00:00.164) 0:19:44.934 **********
2026-04-06 05:27:24.071742 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-06 05:27:24.071752 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 05:27:24.071761 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 05:27:24.071771 | orchestrator |
2026-04-06 05:27:24.071781 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-04-06 05:27:24.071790 | orchestrator | Monday 06 April 2026 05:27:15 +0000 (0:00:00.695) 0:19:45.629 **********
2026-04-06 05:27:24.071800 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5
2026-04-06 05:27:24.071810 | orchestrator |
2026-04-06 05:27:24.071819 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-04-06 05:27:24.071829 | orchestrator | Monday 06 April 2026 05:27:16 +0000 (0:00:00.197) 0:19:45.826 **********
2026-04-06 05:27:24.071846 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:24.071855 | orchestrator |
2026-04-06 05:27:24.071865 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-04-06 05:27:24.071875 | orchestrator | Monday 06 April 2026 05:27:16 +0000 (0:00:00.132) 0:19:45.959 **********
2026-04-06 05:27:24.071884 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:24.071894 | orchestrator |
2026-04-06 05:27:24.071904 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-04-06 05:27:24.071913 | orchestrator | Monday 06 April 2026 05:27:16 +0000 (0:00:00.127) 0:19:46.087 **********
2026-04-06 05:27:24.071923 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:27:24.071932 | orchestrator |
2026-04-06 05:27:24.071942 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-04-06 05:27:24.071952 | orchestrator | Monday 06 April 2026 05:27:16 +0000 (0:00:00.447) 0:19:46.534 **********
2026-04-06 05:27:24.071961 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:27:24.071971 | orchestrator |
2026-04-06 05:27:24.071980 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-04-06 05:27:24.071990 | orchestrator | Monday 06 April 2026 05:27:16 +0000 (0:00:00.162) 0:19:46.696 **********
2026-04-06 05:27:24.071999 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-06 05:27:24.072009 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-06 05:27:24.072019 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-06 05:27:24.072029 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-06 05:27:24.072038 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-06 05:27:24.072048 | orchestrator |
2026-04-06 05:27:24.072058 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-04-06 05:27:24.072072 | orchestrator | Monday 06 April 2026 05:27:18 +0000 (0:00:01.863) 0:19:48.560 **********
2026-04-06 05:27:24.072082 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:24.072092 | orchestrator |
2026-04-06 05:27:24.072101 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-04-06 05:27:24.072111 | orchestrator | Monday 06 April 2026 05:27:18 +0000 (0:00:00.132) 0:19:48.692 **********
2026-04-06 05:27:24.072120 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5
2026-04-06 05:27:24.072130 | orchestrator |
2026-04-06 05:27:24.072139 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-04-06 05:27:24.072149 | orchestrator | Monday 06 April 2026 05:27:19 +0000 (0:00:00.496) 0:19:49.189 **********
2026-04-06 05:27:24.072158 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-06 05:27:24.072168 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-04-06 05:27:24.072178 | orchestrator |
2026-04-06 05:27:24.072187 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-04-06 05:27:24.072197 | orchestrator | Monday 06 April 2026 05:27:20 +0000 (0:00:00.850) 0:19:50.039 **********
2026-04-06 05:27:24.072206 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-06 05:27:24.072216 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-06 05:27:24.072225 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-06 05:27:24.072235 | orchestrator |
2026-04-06 05:27:24.072245 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-04-06 05:27:24.072254 | orchestrator | Monday 06 April 2026 05:27:22 +0000 (0:00:02.252) 0:19:52.292 **********
2026-04-06 05:27:24.072278 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-04-06 05:27:24.072288 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-06 05:27:24.072298 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:27:24.072307 | orchestrator |
2026-04-06 05:27:24.072317 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-04-06 05:27:24.072333 | orchestrator | Monday 06 April 2026 05:27:23 +0000 (0:00:00.960) 0:19:53.252 **********
2026-04-06 05:27:24.072342 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:24.072352 | orchestrator |
2026-04-06 05:27:24.072362 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-04-06 05:27:24.072371 | orchestrator | Monday 06 April 2026 05:27:23 +0000 (0:00:00.147) 0:19:53.501 **********
2026-04-06 05:27:24.072381 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:24.072390 | orchestrator |
2026-04-06 05:27:24.072400 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-04-06 05:27:24.072410 | orchestrator | Monday 06 April 2026 05:27:23 +0000 (0:00:00.147) 0:19:53.649 **********
2026-04-06 05:27:24.072419 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:27:24.072429 | orchestrator |
2026-04-06 05:27:24.072444 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-04-06 05:29:21.513782 | orchestrator | Monday 06 April 2026 05:27:24 +0000 (0:00:00.133) 0:19:53.782 **********
2026-04-06 05:29:21.513896 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5
2026-04-06 05:29:21.513912 | orchestrator |
2026-04-06 05:29:21.513925 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-04-06 05:29:21.513936 | orchestrator | Monday 06 April 2026 05:27:24 +0000 (0:00:00.210) 0:19:53.993 **********
2026-04-06 05:29:21.513947 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:29:21.513960 | orchestrator |
2026-04-06 05:29:21.513971 | orchestrator | TASK [ceph-osd : Collect osd
ids] ********************************************** 2026-04-06 05:29:21.513982 | orchestrator | Monday 06 April 2026 05:27:24 +0000 (0:00:00.455) 0:19:54.448 ********** 2026-04-06 05:29:21.513993 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:29:21.514004 | orchestrator | 2026-04-06 05:29:21.514073 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-06 05:29:21.514087 | orchestrator | Monday 06 April 2026 05:27:27 +0000 (0:00:02.313) 0:19:56.761 ********** 2026-04-06 05:29:21.514098 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5 2026-04-06 05:29:21.514109 | orchestrator | 2026-04-06 05:29:21.514120 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-04-06 05:29:21.514132 | orchestrator | Monday 06 April 2026 05:27:27 +0000 (0:00:00.480) 0:19:57.241 ********** 2026-04-06 05:29:21.514143 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:29:21.514154 | orchestrator | 2026-04-06 05:29:21.514165 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-06 05:29:21.514177 | orchestrator | Monday 06 April 2026 05:27:28 +0000 (0:00:00.995) 0:19:58.237 ********** 2026-04-06 05:29:21.514188 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:29:21.514199 | orchestrator | 2026-04-06 05:29:21.514210 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-06 05:29:21.514221 | orchestrator | Monday 06 April 2026 05:27:29 +0000 (0:00:00.926) 0:19:59.164 ********** 2026-04-06 05:29:21.514232 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:29:21.514243 | orchestrator | 2026-04-06 05:29:21.514255 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-06 05:29:21.514266 | orchestrator | Monday 06 April 2026 05:27:30 +0000 (0:00:01.240) 0:20:00.405 ********** 2026-04-06 
05:29:21.514277 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:29:21.514290 | orchestrator | 2026-04-06 05:29:21.514301 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-04-06 05:29:21.514312 | orchestrator | Monday 06 April 2026 05:27:30 +0000 (0:00:00.161) 0:20:00.566 ********** 2026-04-06 05:29:21.514360 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:29:21.514375 | orchestrator | 2026-04-06 05:29:21.514388 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-06 05:29:21.514401 | orchestrator | Monday 06 April 2026 05:27:31 +0000 (0:00:00.152) 0:20:00.719 ********** 2026-04-06 05:29:21.514414 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-04-06 05:29:21.514452 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-04-06 05:29:21.514466 | orchestrator | 2026-04-06 05:29:21.514494 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-06 05:29:21.514507 | orchestrator | Monday 06 April 2026 05:27:31 +0000 (0:00:00.830) 0:20:01.550 ********** 2026-04-06 05:29:21.514520 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-04-06 05:29:21.514533 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-04-06 05:29:21.514545 | orchestrator | 2026-04-06 05:29:21.514558 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-04-06 05:29:21.514571 | orchestrator | Monday 06 April 2026 05:27:33 +0000 (0:00:01.824) 0:20:03.374 ********** 2026-04-06 05:29:21.514584 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-06 05:29:21.514597 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-04-06 05:29:21.514610 | orchestrator | 2026-04-06 05:29:21.514622 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-06 05:29:21.514635 | orchestrator | Monday 06 April 2026 05:27:37 +0000 (0:00:03.516) 
0:20:06.890 ********** 2026-04-06 05:29:21.514648 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:29:21.514660 | orchestrator | 2026-04-06 05:29:21.514674 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-06 05:29:21.514687 | orchestrator | Monday 06 April 2026 05:27:37 +0000 (0:00:00.244) 0:20:07.135 ********** 2026-04-06 05:29:21.514698 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-04-06 05:29:21.514709 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:29:21.514721 | orchestrator | 2026-04-06 05:29:21.514731 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-06 05:29:21.514742 | orchestrator | Monday 06 April 2026 05:27:49 +0000 (0:00:12.256) 0:20:19.392 ********** 2026-04-06 05:29:21.514753 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:29:21.514764 | orchestrator | 2026-04-06 05:29:21.514775 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-04-06 05:29:21.514786 | orchestrator | Monday 06 April 2026 05:27:49 +0000 (0:00:00.317) 0:20:19.710 ********** 2026-04-06 05:29:21.514798 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:29:21.514809 | orchestrator | 2026-04-06 05:29:21.514820 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-04-06 05:29:21.514831 | orchestrator | Monday 06 April 2026 05:27:50 +0000 (0:00:00.504) 0:20:20.214 ********** 2026-04-06 05:29:21.514842 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:29:21.514853 | orchestrator | 2026-04-06 05:29:21.514864 | orchestrator | TASK [Waiting for clean pgs...] 
************************************************ 2026-04-06 05:29:21.514875 | orchestrator | Monday 06 April 2026 05:27:50 +0000 (0:00:00.115) 0:20:20.329 ********** 2026-04-06 05:29:21.514886 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 2026-04-06 05:29:21.514897 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-04-06 05:29:21.514926 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:29:21.514937 | orchestrator | 2026-04-06 05:29:21.514948 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-06 05:29:21.514959 | orchestrator | Monday 06 April 2026 05:27:57 +0000 (0:00:07.337) 0:20:27.667 ********** 2026-04-06 05:29:21.514970 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:29:21.514981 | orchestrator | 2026-04-06 05:29:21.514992 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-06 05:29:21.515003 | orchestrator | Monday 06 April 2026 05:27:58 +0000 (0:00:00.150) 0:20:27.818 ********** 2026-04-06 05:29:21.515014 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:29:21.515024 | orchestrator | 2026-04-06 05:29:21.515035 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-06 05:29:21.515046 | orchestrator | Monday 06 April 2026 05:27:58 +0000 (0:00:00.138) 0:20:27.957 ********** 2026-04-06 05:29:21.515066 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:29:21.515077 | orchestrator | 2026-04-06 05:29:21.515088 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-06 05:29:21.515099 | orchestrator | Monday 06 April 2026 05:27:58 +0000 (0:00:00.129) 0:20:28.086 ********** 2026-04-06 05:29:21.515110 | orchestrator | skipping: [testbed-node-5] 2026-04-06 
05:29:21.515121 | orchestrator | 2026-04-06 05:29:21.515132 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-06 05:29:21.515142 | orchestrator | Monday 06 April 2026 05:27:58 +0000 (0:00:00.123) 0:20:28.209 ********** 2026-04-06 05:29:21.515153 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:29:21.515164 | orchestrator | 2026-04-06 05:29:21.515175 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-04-06 05:29:21.515186 | orchestrator | Monday 06 April 2026 05:27:58 +0000 (0:00:00.130) 0:20:28.340 ********** 2026-04-06 05:29:21.515197 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:29:21.515207 | orchestrator | 2026-04-06 05:29:21.515218 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-06 05:29:21.515229 | orchestrator | Monday 06 April 2026 05:27:58 +0000 (0:00:00.149) 0:20:28.490 ********** 2026-04-06 05:29:21.515240 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:29:21.515250 | orchestrator | 2026-04-06 05:29:21.515261 | orchestrator | PLAY [Complete osd upgrade] **************************************************** 2026-04-06 05:29:21.515272 | orchestrator | 2026-04-06 05:29:21.515283 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-06 05:29:21.515294 | orchestrator | Monday 06 April 2026 05:27:59 +0000 (0:00:00.778) 0:20:29.268 ********** 2026-04-06 05:29:21.515305 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:29:21.515316 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:29:21.515349 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:29:21.515361 | orchestrator | 2026-04-06 05:29:21.515372 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-06 05:29:21.515383 | orchestrator | Monday 06 April 2026 05:28:00 +0000 (0:00:01.069) 0:20:30.338 ********** 
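The `Check if podman binary is present` / `Set_fact container_binary` pair above probes the node and then records which container engine to use. A minimal sketch of that selection logic (the helper name and injectable lookup are illustrative; ceph-ansible does this with a `command` task plus `set_fact`, not Python):

```python
import shutil

def pick_container_binary(which=shutil.which):
    """Prefer podman when it is on PATH, otherwise fall back to docker.

    The `which` callable is injectable so the logic is testable on a
    machine that has neither binary installed.
    """
    if which("podman"):
        return "podman"
    return "docker"

# Injected lookup simulating a host that only has docker installed,
# which matches this run (the mon containers are driven via `docker ps`).
fake_which = lambda name: "/usr/bin/docker" if name == "docker" else None
print(pick_container_binary(fake_which))  # docker
```

On this testbed the fact resolves to `docker`, as the later `Find a running mon container` task shows.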
2026-04-06 05:29:21.515394 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:29:21.515405 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:29:21.515415 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:29:21.515427 | orchestrator |
2026-04-06 05:29:21.515443 | orchestrator | TASK [Re-enable pg autoscale on pools] *****************************************
2026-04-06 05:29:21.515454 | orchestrator | Monday 06 April 2026 05:28:01 +0000 (0:00:00.560) 0:20:30.898 **********
2026-04-06 05:29:21.515465 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'})
2026-04-06 05:29:21.515477 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'})
2026-04-06 05:29:21.515488 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'})
2026-04-06 05:29:21.515499 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'})
2026-04-06 05:29:21.515511 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'})
2026-04-06 05:29:21.515522 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'})
2026-04-06 05:29:21.515533 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'})
2026-04-06 05:29:21.515544 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'})
2026-04-06 05:29:21.515555 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'})
2026-04-06 05:29:21.515566 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})
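The `Re-enable pg autoscale on pools` loop above issues one `ceph osd pool set <pool> pg_autoscale_mode on` per pool and skips the items whose desired mode is `off`, which is exactly the changed/skipping split visible in the output. A hedged sketch of that command generation (pool items copied from the log; the real task templates this in Ansible, and the exec prefix is an assumption based on the delegated mon):

```python
def autoscale_commands(pools, exec_prefix="docker exec ceph-mon-testbed-node-0"):
    """Build the CLI calls the task issues; pools whose target mode is
    'off' are left alone (they appear as 'skipping' items above)."""
    cmds = []
    for pool in pools:
        if pool["mode"] != "on":
            continue  # matches skipping: ... (item={'mode': 'off'})
        cmds.append(
            f"{exec_prefix} ceph osd pool set {pool['name']} pg_autoscale_mode {pool['mode']}"
        )
    return cmds

pools = [{"name": ".mgr", "mode": "on"}, {"name": "backups", "mode": "off"}]
for cmd in autoscale_commands(pools):
    print(cmd)
```

The OpenStack data pools (`backups`, `volumes`, `images`, `metrics`, `vms`) keep autoscaling off in this deployment, so only the Ceph-internal and RGW pools are touched.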
2026-04-06 05:29:21.515584 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})
2026-04-06 05:29:21.515595 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})
2026-04-06 05:29:21.515606 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})
2026-04-06 05:29:21.515617 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})
2026-04-06 05:29:21.515628 | orchestrator |
2026-04-06 05:29:21.515638 | orchestrator | TASK [Unset osd flags] *********************************************************
2026-04-06 05:29:21.515649 | orchestrator | Monday 06 April 2026 05:29:16 +0000 (0:01:15.577) 0:21:46.476 **********
2026-04-06 05:29:21.515660 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout)
2026-04-06 05:29:21.515671 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub)
2026-04-06 05:29:21.515682 | orchestrator |
2026-04-06 05:29:21.515700 | orchestrator | TASK [Re-enable balancer] ******************************************************
2026-04-06 05:29:32.491464 | orchestrator | Monday 06 April 2026 05:29:21 +0000 (0:00:04.745) 0:21:51.222 **********
2026-04-06 05:29:32.491616 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-06 05:29:32.491636 | orchestrator |
2026-04-06 05:29:32.491686 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] **********************
2026-04-06 05:29:32.491699 | orchestrator |
2026-04-06 05:29:32.491711 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-06 05:29:32.491723 | orchestrator | Monday 06 April 2026 05:29:24 +0000 (0:00:02.623) 0:21:53.845 **********
2026-04-06 05:29:32.491735 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-04-06 05:29:32.491746 | orchestrator |
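`Unset osd flags` is the tail end of the usual OSD-upgrade pattern: `noout` and `nodeep-scrub` are set before OSDs are restarted and cleared again once the cluster is healthy. Sketched here as a context manager so the set/unset pairing is explicit (illustrative only; the playbook drives `ceph osd set`/`ceph osd unset` through separate tasks, and `run` is a hypothetical executor callable):

```python
from contextlib import contextmanager

@contextmanager
def osd_maintenance_flags(run, flags=("noout", "nodeep-scrub")):
    """Set the flags on entry and always unset them on exit, even if
    the work in between fails. `run` executes a ceph CLI string."""
    for flag in flags:
        run(f"ceph osd set {flag}")
    try:
        yield
    finally:
        for flag in flags:
            run(f"ceph osd unset {flag}")

issued = []  # record the commands instead of executing them
with osd_maintenance_flags(issued.append):
    issued.append("restart osds ...")
print(issued)
```

With `noout` set, restarting OSDs does not trigger rebalancing; the flags coming off (and the balancer being re-enabled right after) is what returns the cluster to normal operation.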
2026-04-06 05:29:32.491758 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-06 05:29:32.491769 | orchestrator | Monday 06 April 2026 05:29:24 +0000 (0:00:00.561) 0:21:54.407 **********
2026-04-06 05:29:32.491780 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:29:32.491793 | orchestrator |
2026-04-06 05:29:32.491804 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-06 05:29:32.491815 | orchestrator | Monday 06 April 2026 05:29:25 +0000 (0:00:00.539) 0:21:54.946 **********
2026-04-06 05:29:32.491826 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:29:32.491836 | orchestrator |
2026-04-06 05:29:32.491847 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-06 05:29:32.491858 | orchestrator | Monday 06 April 2026 05:29:25 +0000 (0:00:00.151) 0:21:55.098 **********
2026-04-06 05:29:32.491869 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:29:32.491880 | orchestrator |
2026-04-06 05:29:32.491891 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-06 05:29:32.491902 | orchestrator | Monday 06 April 2026 05:29:25 +0000 (0:00:00.456) 0:21:55.555 **********
2026-04-06 05:29:32.491913 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:29:32.491924 | orchestrator |
2026-04-06 05:29:32.491935 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-06 05:29:32.491946 | orchestrator | Monday 06 April 2026 05:29:26 +0000 (0:00:00.163) 0:21:55.719 **********
2026-04-06 05:29:32.491957 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:29:32.491968 | orchestrator |
2026-04-06 05:29:32.491979 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-06 05:29:32.491990 | orchestrator | Monday 06 April 2026 05:29:26 +0000 (0:00:00.159) 0:21:55.878 **********
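The `ceph_cmd` / `container_exec_cmd` facts set above boil down to a prefix that wraps every ceph invocation in a `<container_binary> exec` against a mon container when the deployment is containerized. A sketch of how such a prefix is assembled (the container name pattern `ceph-mon-<hostname>` is taken from the `docker ps --filter` calls later in this log; the exact fact logic lives in ceph-facts and may differ in detail):

```python
def container_exec_cmd(container_binary, mon_hostname, containerized=True):
    """Return the prefix prepended to ceph commands, e.g.
    'docker exec ceph-mon-testbed-node-0' for the mon on testbed-node-0."""
    if not containerized:
        return ""  # bare-metal deployments call ceph directly
    return f"{container_binary} exec ceph-mon-{mon_hostname}"

print(container_exec_cmd("docker", "testbed-node-0"))
```

Delegated tasks like `Re-enable balancer` then run `<prefix> ceph balancer on` on the first mon host.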
2026-04-06 05:29:32.492000 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:29:32.492011 | orchestrator |
2026-04-06 05:29:32.492023 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-06 05:29:32.492034 | orchestrator | Monday 06 April 2026 05:29:26 +0000 (0:00:00.195) 0:21:56.074 **********
2026-04-06 05:29:32.492045 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:29:32.492056 | orchestrator |
2026-04-06 05:29:32.492067 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-06 05:29:32.492103 | orchestrator | Monday 06 April 2026 05:29:26 +0000 (0:00:00.168) 0:21:56.242 **********
2026-04-06 05:29:32.492130 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:29:32.492141 | orchestrator |
2026-04-06 05:29:32.492152 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-06 05:29:32.492163 | orchestrator | Monday 06 April 2026 05:29:26 +0000 (0:00:00.152) 0:21:56.395 **********
2026-04-06 05:29:32.492174 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 05:29:32.492185 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 05:29:32.492195 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 05:29:32.492206 | orchestrator |
2026-04-06 05:29:32.492217 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-06 05:29:32.492228 | orchestrator | Monday 06 April 2026 05:29:27 +0000 (0:00:01.033) 0:21:57.428 **********
2026-04-06 05:29:32.492239 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:29:32.492250 | orchestrator |
2026-04-06 05:29:32.492260 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-06 05:29:32.492271 | orchestrator | Monday 06 April 2026 05:29:27 +0000 (0:00:00.243) 0:21:57.672 **********
2026-04-06 05:29:32.492282 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 05:29:32.492293 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 05:29:32.492304 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 05:29:32.492315 | orchestrator |
2026-04-06 05:29:32.492326 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-06 05:29:32.492362 | orchestrator | Monday 06 April 2026 05:29:30 +0000 (0:00:02.335) 0:22:00.007 **********
2026-04-06 05:29:32.492373 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 05:29:32.492384 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-06 05:29:32.492395 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-06 05:29:32.492406 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:29:32.492417 | orchestrator |
2026-04-06 05:29:32.492428 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-06 05:29:32.492439 | orchestrator | Monday 06 April 2026 05:29:31 +0000 (0:00:01.179) 0:22:01.187 **********
2026-04-06 05:29:32.492451 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-06 05:29:32.492465 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-06 05:29:32.492496 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-06 05:29:32.492508 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:29:32.492519 | orchestrator |
2026-04-06 05:29:32.492530 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-06 05:29:32.492541 | orchestrator | Monday 06 April 2026 05:29:32 +0000 (0:00:00.201) 0:22:01.852 **********
2026-04-06 05:29:32.492555 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-06 05:29:32.492580 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-06 05:29:32.492592 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-06 05:29:32.492603 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:29:32.492614 | orchestrator |
2026-04-06 05:29:32.492625 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-06 05:29:32.492636 | orchestrator | Monday 06 April 2026 05:29:32 +0000 (0:00:00.201) 0:22:02.053 **********
2026-04-06 05:29:32.492655 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '06ed7bf51830', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-06 05:29:28.889990', 'end': '2026-04-06 05:29:28.942288', 'delta': '0:00:00.052298', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['06ed7bf51830'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-06 05:29:32.492671 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '6879ce368bbc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-06 05:29:29.511958', 'end': '2026-04-06 05:29:29.577188', 'delta': '0:00:00.065230', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6879ce368bbc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-06 05:29:32.492690 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a00606ebddc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-06 05:29:30.104233', 'end': '2026-04-06 05:29:30.143390', 'delta': '0:00:00.039157', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a00606ebddc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-06 05:29:36.355908 | orchestrator |
2026-04-06 05:29:36.356019 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-06 05:29:36.356042 | orchestrator | Monday 06 April 2026 05:29:32 +0000 (0:00:00.240) 0:22:02.294 **********
2026-04-06 05:29:36.356053 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:29:36.356063 | orchestrator |
2026-04-06 05:29:36.356072 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-06 05:29:36.356081 | orchestrator | Monday 06 April 2026 05:29:32 +0000 (0:00:00.277) 0:22:02.571 **********
2026-04-06 05:29:36.356112 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:29:36.356123 | orchestrator |
2026-04-06 05:29:36.356132 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-06 05:29:36.356141 | orchestrator | Monday 06 April 2026 05:29:33 +0000 (0:00:00.290) 0:22:02.862 **********
2026-04-06 05:29:36.356150 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:29:36.356159 | orchestrator |
2026-04-06 05:29:36.356167 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-06 05:29:36.356176 | orchestrator | Monday 06 April 2026 05:29:33 +0000 (0:00:00.145) 0:22:03.008 **********
2026-04-06 05:29:36.356185 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:29:36.356193 | orchestrator |
2026-04-06 05:29:36.356202 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-06 05:29:36.356211 | orchestrator | Monday 06 April 2026 05:29:34 +0000 (0:00:00.994) 0:22:04.002 **********
2026-04-06 05:29:36.356219 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:29:36.356228 | orchestrator |
2026-04-06 05:29:36.356237 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-06 05:29:36.356245 | orchestrator | Monday 06 April 2026 05:29:34 +0000 (0:00:00.152) 0:22:04.154 **********
2026-04-06 05:29:36.356254 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:29:36.356263 | orchestrator |
2026-04-06 05:29:36.356271 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-06 05:29:36.356280 | orchestrator | Monday 06 April 2026 05:29:34 +0000 (0:00:00.156) 0:22:04.310 **********
2026-04-06 05:29:36.356289 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:29:36.356297 | orchestrator |
2026-04-06 05:29:36.356306 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-06 05:29:36.356315 | orchestrator | Monday 06 April 2026 05:29:34 +0000 (0:00:00.262) 0:22:04.573 **********
2026-04-06 05:29:36.356323 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:29:36.356376 | orchestrator |
2026-04-06 05:29:36.356385 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-06 05:29:36.356394 | orchestrator | Monday 06 April 2026 05:29:34 +0000 (0:00:00.138) 0:22:04.711 **********
2026-04-06 05:29:36.356403 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:29:36.356412 | orchestrator |
2026-04-06 05:29:36.356420 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-06 05:29:36.356429 | orchestrator | Monday 06 April 2026 05:29:35 +0000 (0:00:00.142) 0:22:04.853 **********
2026-04-06 05:29:36.356452 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:29:36.356464 | orchestrator |
2026-04-06 05:29:36.356474 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-06 05:29:36.356484 | orchestrator | Monday 06 April 2026 05:29:35 +0000 (0:00:00.441) 0:22:05.294 **********
2026-04-06 05:29:36.356494 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:29:36.356504 | orchestrator |
2026-04-06 05:29:36.356514 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-06 05:29:36.356524 | orchestrator | Monday 06 April 2026 05:29:35 +0000 (0:00:00.152) 0:22:05.447 **********
2026-04-06 05:29:36.356534 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:29:36.356545 | orchestrator |
2026-04-06 05:29:36.356555 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-06 05:29:36.356565 | orchestrator | Monday 06 April 2026 05:29:35 +0000 (0:00:00.152) 0:22:05.599 **********
2026-04-06 05:29:36.356575 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:29:36.356585 | orchestrator |
2026-04-06 05:29:36.356595 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-06 05:29:36.356607 | orchestrator | Monday 06 April 2026 05:29:36 +0000 (0:00:00.133) 0:22:05.733 **********
2026-04-06 05:29:36.356617 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:29:36.356627 | orchestrator |
2026-04-06 05:29:36.356637 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-06 05:29:36.356647 | orchestrator | Monday 06 April 2026 05:29:36 +0000 (0:00:00.154) 0:22:05.888 **********
2026-04-06 05:29:36.356666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:29:36.356679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:29:36.356707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:29:36.356721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-06 05:29:36.356733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:29:36.356744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:29:36.356760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:29:36.356783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '23f8d4f9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part16', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328',
'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part14', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part15', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part1', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-06 05:29:36.656056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:29:36.656161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:29:36.656177 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:29:36.656191 | orchestrator | 2026-04-06 05:29:36.656204 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-06 05:29:36.656215 | orchestrator | Monday 06 April 2026 05:29:36 +0000 (0:00:00.312) 0:22:06.200 ********** 2026-04-06 05:29:36.656229 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:29:36.656259 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:29:36.656272 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:29:36.656306 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:29:36.656382 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:29:36.656397 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:29:36.656409 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:29:36.656435 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '23f8d4f9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part16', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part14', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part15', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part1', 'scsi-SQEMU_QEMU_HARDDISK_23f8d4f9-bada-4d0a-9690-8d695318e058-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:29:36.656468 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:30:06.300716 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:30:06.300772 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:30:06.300779 | orchestrator | 2026-04-06 05:30:06.300784 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-06 05:30:06.300789 | 
orchestrator | Monday 06 April 2026 05:29:36 +0000 (0:00:00.287) 0:22:06.487 ********** 2026-04-06 05:30:06.300793 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:30:06.300797 | orchestrator | 2026-04-06 05:30:06.300802 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-06 05:30:06.300805 | orchestrator | Monday 06 April 2026 05:29:37 +0000 (0:00:00.475) 0:22:06.963 ********** 2026-04-06 05:30:06.300809 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:30:06.300814 | orchestrator | 2026-04-06 05:30:06.300818 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 05:30:06.300822 | orchestrator | Monday 06 April 2026 05:29:37 +0000 (0:00:00.144) 0:22:07.108 ********** 2026-04-06 05:30:06.300825 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:30:06.300829 | orchestrator | 2026-04-06 05:30:06.300833 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 05:30:06.300837 | orchestrator | Monday 06 April 2026 05:29:37 +0000 (0:00:00.484) 0:22:07.593 ********** 2026-04-06 05:30:06.300851 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:30:06.300855 | orchestrator | 2026-04-06 05:30:06.300859 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 05:30:06.300863 | orchestrator | Monday 06 April 2026 05:29:38 +0000 (0:00:00.147) 0:22:07.740 ********** 2026-04-06 05:30:06.300867 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:30:06.300870 | orchestrator | 2026-04-06 05:30:06.300880 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 05:30:06.300885 | orchestrator | Monday 06 April 2026 05:29:38 +0000 (0:00:00.266) 0:22:08.007 ********** 2026-04-06 05:30:06.300888 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:30:06.300892 | orchestrator | 2026-04-06 05:30:06.300896 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-06 05:30:06.300900 | orchestrator | Monday 06 April 2026 05:29:38 +0000 (0:00:00.159) 0:22:08.167 ********** 2026-04-06 05:30:06.300904 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-06 05:30:06.300908 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-06 05:30:06.300912 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-06 05:30:06.300916 | orchestrator | 2026-04-06 05:30:06.300919 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-06 05:30:06.300923 | orchestrator | Monday 06 April 2026 05:29:39 +0000 (0:00:01.369) 0:22:09.536 ********** 2026-04-06 05:30:06.300927 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-06 05:30:06.300931 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-06 05:30:06.300935 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-06 05:30:06.300939 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:30:06.300943 | orchestrator | 2026-04-06 05:30:06.300947 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-06 05:30:06.300951 | orchestrator | Monday 06 April 2026 05:29:40 +0000 (0:00:00.203) 0:22:09.740 ********** 2026-04-06 05:30:06.300954 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:30:06.300958 | orchestrator | 2026-04-06 05:30:06.300962 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-06 05:30:06.300966 | orchestrator | Monday 06 April 2026 05:29:40 +0000 (0:00:00.133) 0:22:09.873 ********** 2026-04-06 05:30:06.300970 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-06 05:30:06.300974 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 
05:30:06.300978 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:30:06.300982 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-06 05:30:06.300986 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-06 05:30:06.300989 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-06 05:30:06.300993 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-06 05:30:06.300997 | orchestrator | 2026-04-06 05:30:06.301001 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-06 05:30:06.301005 | orchestrator | Monday 06 April 2026 05:29:41 +0000 (0:00:00.940) 0:22:10.814 ********** 2026-04-06 05:30:06.301008 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-06 05:30:06.301012 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:30:06.301016 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:30:06.301020 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-06 05:30:06.301031 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-06 05:30:06.301035 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-06 05:30:06.301042 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-06 05:30:06.301046 | orchestrator | 2026-04-06 05:30:06.301050 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************ 2026-04-06 05:30:06.301054 | orchestrator | Monday 06 April 2026 05:29:42 +0000 (0:00:01.696) 0:22:12.511 
********** 2026-04-06 05:30:06.301058 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:30:06.301061 | orchestrator | 2026-04-06 05:30:06.301065 | orchestrator | TASK [Wait until only rank 0 is up] ******************************************** 2026-04-06 05:30:06.301069 | orchestrator | Monday 06 April 2026 05:29:45 +0000 (0:00:02.227) 0:22:14.738 ********** 2026-04-06 05:30:06.301073 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:30:06.301077 | orchestrator | 2026-04-06 05:30:06.301081 | orchestrator | TASK [Get name of remaining active mds] **************************************** 2026-04-06 05:30:06.301084 | orchestrator | Monday 06 April 2026 05:29:46 +0000 (0:00:01.941) 0:22:16.679 ********** 2026-04-06 05:30:06.301088 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:30:06.301092 | orchestrator | 2026-04-06 05:30:06.301096 | orchestrator | TASK [Set_fact mds_active_name] ************************************************ 2026-04-06 05:30:06.301100 | orchestrator | Monday 06 April 2026 05:29:48 +0000 (0:00:01.152) 0:22:17.831 ********** 2026-04-06 05:30:06.301107 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_4770', 'value': {'gid': 4770, 'name': 'testbed-node-3', 'rank': 0, 'incarnation': 4, 'state': 'up:active', 'state_seq': 2, 'addr': '192.168.16.13:6817/3800011736', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.13:6816', 'nonce': 3800011736}, {'type': 'v1', 'addr': '192.168.16.13:6817', 'nonce': 3800011736}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}}) 
2026-04-06 05:30:06.301112 | orchestrator | 2026-04-06 05:30:06.301116 | orchestrator | TASK [Set_fact mds_active_host] ************************************************ 2026-04-06 05:30:06.301120 | orchestrator | Monday 06 April 2026 05:29:48 +0000 (0:00:00.170) 0:22:18.001 ********** 2026-04-06 05:30:06.301123 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-3) 2026-04-06 05:30:06.301127 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-06 05:30:06.301131 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-06 05:30:06.301135 | orchestrator | 2026-04-06 05:30:06.301139 | orchestrator | TASK [Create standby_mdss group] *********************************************** 2026-04-06 05:30:06.301142 | orchestrator | Monday 06 April 2026 05:29:49 +0000 (0:00:00.881) 0:22:18.883 ********** 2026-04-06 05:30:06.301146 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-4) 2026-04-06 05:30:06.301150 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-5) 2026-04-06 05:30:06.301154 | orchestrator | 2026-04-06 05:30:06.301157 | orchestrator | TASK [Stop standby ceph mds] *************************************************** 2026-04-06 05:30:06.301161 | orchestrator | Monday 06 April 2026 05:29:49 +0000 (0:00:00.810) 0:22:19.694 ********** 2026-04-06 05:30:06.301165 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-06 05:30:06.301169 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-06 05:30:06.301173 | orchestrator | 2026-04-06 05:30:06.301177 | orchestrator | TASK [Mask systemd units for standby ceph mds] ********************************* 2026-04-06 05:30:06.301180 | orchestrator | Monday 06 April 2026 05:30:00 +0000 (0:00:10.288) 0:22:29.983 ********** 2026-04-06 05:30:06.301184 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-04-06 05:30:06.301188 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-06 05:30:06.301195 | orchestrator | 2026-04-06 05:30:06.301199 | orchestrator | TASK [Wait until all standbys mds are stopped] ********************************* 2026-04-06 05:30:06.301202 | orchestrator | Monday 06 April 2026 05:30:03 +0000 (0:00:03.465) 0:22:33.449 ********** 2026-04-06 05:30:06.301206 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:30:06.301210 | orchestrator | 2026-04-06 05:30:06.301214 | orchestrator | TASK [Create active_mdss group] ************************************************ 2026-04-06 05:30:06.301218 | orchestrator | Monday 06 April 2026 05:30:04 +0000 (0:00:01.219) 0:22:34.669 ********** 2026-04-06 05:30:06.301221 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:30:06.301225 | orchestrator | 2026-04-06 05:30:06.301229 | orchestrator | PLAY [Upgrade active mds] ****************************************************** 2026-04-06 05:30:06.301233 | orchestrator | 2026-04-06 05:30:06.301237 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-06 05:30:06.301240 | orchestrator | Monday 06 April 2026 05:30:05 +0000 (0:00:00.772) 0:22:35.441 ********** 2026-04-06 05:30:06.301244 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-04-06 05:30:06.301248 | orchestrator | 2026-04-06 05:30:06.301254 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-06 05:30:06.301260 | orchestrator | Monday 06 April 2026 05:30:05 +0000 (0:00:00.213) 0:22:35.655 ********** 2026-04-06 05:30:06.301270 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:12.216840 | orchestrator | 2026-04-06 05:30:12.216925 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-06 05:30:12.216947 | orchestrator | Monday 
06 April 2026 05:30:06 +0000 (0:00:00.440) 0:22:36.096 ********** 2026-04-06 05:30:12.216963 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:12.216976 | orchestrator | 2026-04-06 05:30:12.216984 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-06 05:30:12.216992 | orchestrator | Monday 06 April 2026 05:30:06 +0000 (0:00:00.121) 0:22:36.217 ********** 2026-04-06 05:30:12.217000 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:12.217009 | orchestrator | 2026-04-06 05:30:12.217017 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-06 05:30:12.217025 | orchestrator | Monday 06 April 2026 05:30:06 +0000 (0:00:00.441) 0:22:36.658 ********** 2026-04-06 05:30:12.217033 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:12.217041 | orchestrator | 2026-04-06 05:30:12.217049 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-06 05:30:12.217057 | orchestrator | Monday 06 April 2026 05:30:07 +0000 (0:00:00.143) 0:22:36.802 ********** 2026-04-06 05:30:12.217065 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:12.217074 | orchestrator | 2026-04-06 05:30:12.217083 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-06 05:30:12.217092 | orchestrator | Monday 06 April 2026 05:30:07 +0000 (0:00:00.120) 0:22:36.923 ********** 2026-04-06 05:30:12.217101 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:12.217109 | orchestrator | 2026-04-06 05:30:12.217118 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-06 05:30:12.217127 | orchestrator | Monday 06 April 2026 05:30:07 +0000 (0:00:00.147) 0:22:37.070 ********** 2026-04-06 05:30:12.217136 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:12.217146 | orchestrator | 2026-04-06 05:30:12.217155 | orchestrator | TASK 
[ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-06 05:30:12.217163 | orchestrator | Monday 06 April 2026 05:30:07 +0000 (0:00:00.136) 0:22:37.207 ********** 2026-04-06 05:30:12.217172 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:12.217181 | orchestrator | 2026-04-06 05:30:12.217190 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-06 05:30:12.217199 | orchestrator | Monday 06 April 2026 05:30:07 +0000 (0:00:00.332) 0:22:37.539 ********** 2026-04-06 05:30:12.217208 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:30:12.217229 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:30:12.217255 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:30:12.217264 | orchestrator | 2026-04-06 05:30:12.217273 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-06 05:30:12.217281 | orchestrator | Monday 06 April 2026 05:30:08 +0000 (0:00:00.646) 0:22:38.186 ********** 2026-04-06 05:30:12.217290 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:12.217298 | orchestrator | 2026-04-06 05:30:12.217307 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-06 05:30:12.217316 | orchestrator | Monday 06 April 2026 05:30:08 +0000 (0:00:00.231) 0:22:38.418 ********** 2026-04-06 05:30:12.217324 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:30:12.217333 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:30:12.217390 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:30:12.217400 | orchestrator | 2026-04-06 05:30:12.217409 | 
orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-06 05:30:12.217420 | orchestrator | Monday 06 April 2026 05:30:10 +0000 (0:00:01.764) 0:22:40.182 ********** 2026-04-06 05:30:12.217430 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-06 05:30:12.217440 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-06 05:30:12.217450 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-06 05:30:12.217460 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:12.217470 | orchestrator | 2026-04-06 05:30:12.217480 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-06 05:30:12.217490 | orchestrator | Monday 06 April 2026 05:30:10 +0000 (0:00:00.386) 0:22:40.568 ********** 2026-04-06 05:30:12.217501 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-06 05:30:12.217514 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-06 05:30:12.217524 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-06 05:30:12.217535 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:12.217545 | orchestrator | 2026-04-06 05:30:12.217555 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-06 05:30:12.217565 | orchestrator | Monday 06 April 2026 
05:30:11 +0000 (0:00:00.584) 0:22:41.152 ********** 2026-04-06 05:30:12.217591 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:30:12.217604 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:30:12.217615 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:30:12.217632 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:12.217643 | orchestrator | 2026-04-06 05:30:12.217653 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-06 05:30:12.217663 | orchestrator | Monday 06 April 2026 05:30:11 +0000 (0:00:00.151) 0:22:41.304 ********** 2026-04-06 05:30:12.217682 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '06ed7bf51830', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': 
'2026-04-06 05:30:09.218444', 'end': '2026-04-06 05:30:09.278121', 'delta': '0:00:00.059677', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['06ed7bf51830'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-06 05:30:12.217703 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6879ce368bbc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-06 05:30:09.759891', 'end': '2026-04-06 05:30:09.812638', 'delta': '0:00:00.052747', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6879ce368bbc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-06 05:30:12.217720 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a00606ebddc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-06 05:30:10.275794', 'end': '2026-04-06 05:30:10.328756', 'delta': '0:00:00.052962', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': ['a00606ebddc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-06 05:30:12.217737 | orchestrator | 2026-04-06 05:30:12.217753 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-06 05:30:12.217768 | orchestrator | Monday 06 April 2026 05:30:11 +0000 (0:00:00.166) 0:22:41.471 ********** 2026-04-06 05:30:12.217783 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:12.217799 | orchestrator | 2026-04-06 05:30:12.217815 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-06 05:30:12.217830 | orchestrator | Monday 06 April 2026 05:30:12 +0000 (0:00:00.247) 0:22:41.718 ********** 2026-04-06 05:30:12.217845 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:12.217854 | orchestrator | 2026-04-06 05:30:12.217863 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-06 05:30:12.217879 | orchestrator | Monday 06 April 2026 05:30:12 +0000 (0:00:00.211) 0:22:41.929 ********** 2026-04-06 05:30:15.819034 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:15.819137 | orchestrator | 2026-04-06 05:30:15.819152 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-06 05:30:15.819189 | orchestrator | Monday 06 April 2026 05:30:12 +0000 (0:00:00.134) 0:22:42.064 ********** 2026-04-06 05:30:15.819201 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:30:15.819212 | orchestrator | 2026-04-06 05:30:15.819224 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-06 05:30:15.819235 | orchestrator | Monday 06 April 2026 05:30:13 +0000 (0:00:00.940) 0:22:43.004 ********** 2026-04-06 05:30:15.819246 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:15.819257 | orchestrator | 
2026-04-06 05:30:15.819268 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-06 05:30:15.819279 | orchestrator | Monday 06 April 2026 05:30:13 +0000 (0:00:00.150) 0:22:43.155 ********** 2026-04-06 05:30:15.819290 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:15.819301 | orchestrator | 2026-04-06 05:30:15.819312 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-06 05:30:15.819323 | orchestrator | Monday 06 April 2026 05:30:13 +0000 (0:00:00.166) 0:22:43.321 ********** 2026-04-06 05:30:15.819334 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:15.819409 | orchestrator | 2026-04-06 05:30:15.819421 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-06 05:30:15.819433 | orchestrator | Monday 06 April 2026 05:30:14 +0000 (0:00:00.950) 0:22:44.271 ********** 2026-04-06 05:30:15.819444 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:15.819455 | orchestrator | 2026-04-06 05:30:15.819465 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-06 05:30:15.819477 | orchestrator | Monday 06 April 2026 05:30:14 +0000 (0:00:00.140) 0:22:44.412 ********** 2026-04-06 05:30:15.819487 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:15.819498 | orchestrator | 2026-04-06 05:30:15.819509 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-06 05:30:15.819521 | orchestrator | Monday 06 April 2026 05:30:14 +0000 (0:00:00.125) 0:22:44.538 ********** 2026-04-06 05:30:15.819532 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:15.819543 | orchestrator | 2026-04-06 05:30:15.819554 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-06 05:30:15.819581 | orchestrator | Monday 06 April 2026 05:30:15 +0000 (0:00:00.183) 
0:22:44.722 ********** 2026-04-06 05:30:15.819594 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:15.819607 | orchestrator | 2026-04-06 05:30:15.819621 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-06 05:30:15.819634 | orchestrator | Monday 06 April 2026 05:30:15 +0000 (0:00:00.127) 0:22:44.849 ********** 2026-04-06 05:30:15.819648 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:15.819661 | orchestrator | 2026-04-06 05:30:15.819674 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-06 05:30:15.819687 | orchestrator | Monday 06 April 2026 05:30:15 +0000 (0:00:00.169) 0:22:45.018 ********** 2026-04-06 05:30:15.819700 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:15.819713 | orchestrator | 2026-04-06 05:30:15.819726 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-06 05:30:15.819739 | orchestrator | Monday 06 April 2026 05:30:15 +0000 (0:00:00.140) 0:22:45.159 ********** 2026-04-06 05:30:15.819752 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:15.819765 | orchestrator | 2026-04-06 05:30:15.819778 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-06 05:30:15.819791 | orchestrator | Monday 06 April 2026 05:30:15 +0000 (0:00:00.175) 0:22:45.335 ********** 2026-04-06 05:30:15.819806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:30:15.819834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 
'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--33ff4195--b9ae--565c--9501--f62265c8cf2c-osd--block--33ff4195--b9ae--565c--9501--f62265c8cf2c', 'dm-uuid-LVM-bPoYmFvg2GavrOdhBiQRDEx8f4M6ftpRd0WF3SgLoZI9250ovpvj600rDtqy23dS'], 'uuids': ['568ee26d-bc52-45e1-a610-bd1b65a33bb1'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '8498d812', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['d0WF3S-gLoZ-I925-0ovp-vj60-0rDt-qy23dS']}})  2026-04-06 05:30:15.819869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103', 'scsi-SQEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '71f71275', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-06 05:30:15.819884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-KIe40k-k1Qf-BSLn-gKBM-IKSP-hovG-JLrIYd', 'scsi-0QEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527', 'scsi-SQEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5872ea60', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--44d7a625--0d29--5597--9a0c--b91ce06f2e33-osd--block--44d7a625--0d29--5597--9a0c--b91ce06f2e33']}})  2026-04-06 05:30:15.819899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:30:15.819919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:30:15.819933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-44-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-06 05:30:15.819945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:30:15.819964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-leWHVn-oic4-cgBg-jJKw-f9UM-EMV2-wXFYs3', 'dm-uuid-CRYPT-LUKS2-9b11f78520334917a26820c7a917e496-leWHVn-oic4-cgBg-jJKw-f9UM-EMV2-wXFYs3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-06 05:30:15.819976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:30:15.819997 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--44d7a625--0d29--5597--9a0c--b91ce06f2e33-osd--block--44d7a625--0d29--5597--9a0c--b91ce06f2e33', 'dm-uuid-LVM-9nFw926dfpKXupvgijedzJHToRNmcQ5JleWHVnoic4cgBgjJKwf9UMEMV2wXFYs3'], 'uuids': ['9b11f785-2033-4917-a268-20c7a917e496'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5872ea60', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['leWHVn-oic4-cgBg-jJKw-f9UM-EMV2-wXFYs3']}})  2026-04-06 05:30:16.109026 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-oc9r6Q-FBfB-APQ9-Ef3d-Gduy-n2RE-MAdmSJ', 'scsi-0QEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c', 'scsi-SQEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8498d812', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--33ff4195--b9ae--565c--9501--f62265c8cf2c-osd--block--33ff4195--b9ae--565c--9501--f62265c8cf2c']}})  2026-04-06 05:30:16.109129 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:30:16.109142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9d494db8', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part16', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part15', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-06 05:30:16.109172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:30:16.109193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:30:16.109201 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-d0WF3S-gLoZ-I925-0ovp-vj60-0rDt-qy23dS', 'dm-uuid-CRYPT-LUKS2-568ee26dbc5245e1a610bd1b65a33bb1-d0WF3S-gLoZ-I925-0ovp-vj60-0rDt-qy23dS'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-06 05:30:16.109208 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:16.109216 | orchestrator | 2026-04-06 05:30:16.109223 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-06 05:30:16.109230 | orchestrator | Monday 06 April 2026 05:30:15 +0000 (0:00:00.361) 0:22:45.696 ********** 2026-04-06 05:30:16.109242 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:30:16.109250 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--33ff4195--b9ae--565c--9501--f62265c8cf2c-osd--block--33ff4195--b9ae--565c--9501--f62265c8cf2c', 'dm-uuid-LVM-bPoYmFvg2GavrOdhBiQRDEx8f4M6ftpRd0WF3SgLoZI9250ovpvj600rDtqy23dS'], 'uuids': ['568ee26d-bc52-45e1-a610-bd1b65a33bb1'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '8498d812', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['d0WF3S-gLoZ-I925-0ovp-vj60-0rDt-qy23dS']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:30:16.109263 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103', 'scsi-SQEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '71f71275', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:30:16.109277 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-KIe40k-k1Qf-BSLn-gKBM-IKSP-hovG-JLrIYd', 'scsi-0QEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527', 'scsi-SQEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5872ea60', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--44d7a625--0d29--5597--9a0c--b91ce06f2e33-osd--block--44d7a625--0d29--5597--9a0c--b91ce06f2e33']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:30:16.231834 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:30:16.231929 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:30:16.231941 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:30:16.231969 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:30:16.231977 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-leWHVn-oic4-cgBg-jJKw-f9UM-EMV2-wXFYs3', 'dm-uuid-CRYPT-LUKS2-9b11f78520334917a26820c7a917e496-leWHVn-oic4-cgBg-jJKw-f9UM-EMV2-wXFYs3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:30:16.231985 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:30:16.232007 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--44d7a625--0d29--5597--9a0c--b91ce06f2e33-osd--block--44d7a625--0d29--5597--9a0c--b91ce06f2e33', 'dm-uuid-LVM-9nFw926dfpKXupvgijedzJHToRNmcQ5JleWHVnoic4cgBgjJKwf9UMEMV2wXFYs3'], 'uuids': ['9b11f785-2033-4917-a268-20c7a917e496'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5872ea60', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['leWHVn-oic4-cgBg-jJKw-f9UM-EMV2-wXFYs3']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:30:16.232019 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-oc9r6Q-FBfB-APQ9-Ef3d-Gduy-n2RE-MAdmSJ', 'scsi-0QEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c', 'scsi-SQEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8498d812', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--33ff4195--b9ae--565c--9501--f62265c8cf2c-osd--block--33ff4195--b9ae--565c--9501--f62265c8cf2c']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:30:16.232034 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:30:16.232049 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9d494db8', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part16', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part15', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:30:25.834441 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:30:25.834559 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:30:25.834598 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-d0WF3S-gLoZ-I925-0ovp-vj60-0rDt-qy23dS', 'dm-uuid-CRYPT-LUKS2-568ee26dbc5245e1a610bd1b65a33bb1-d0WF3S-gLoZ-I925-0ovp-vj60-0rDt-qy23dS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:30:25.834614 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:25.834628 | orchestrator | 2026-04-06 05:30:25.834640 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-06 05:30:25.834653 | orchestrator | Monday 06 April 2026 05:30:16 +0000 (0:00:00.404) 0:22:46.100 ********** 2026-04-06 05:30:25.834664 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:25.834676 | orchestrator | 2026-04-06 05:30:25.834688 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-06 05:30:25.834699 | orchestrator | Monday 06 April 2026 05:30:16 +0000 (0:00:00.496) 0:22:46.597 ********** 2026-04-06 05:30:25.834710 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:25.834720 | orchestrator | 2026-04-06 05:30:25.834732 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 05:30:25.834743 | orchestrator | Monday 06 April 2026 05:30:17 +0000 (0:00:00.151) 0:22:46.749 ********** 2026-04-06 05:30:25.834754 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:25.834764 | orchestrator | 2026-04-06 05:30:25.834776 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 05:30:25.834787 | orchestrator | Monday 06 April 2026 05:30:17 +0000 (0:00:00.484) 0:22:47.233 ********** 2026-04-06 05:30:25.834798 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:25.834809 | orchestrator | 2026-04-06 05:30:25.834820 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 05:30:25.834831 | orchestrator | Monday 06 April 2026 05:30:17 +0000 (0:00:00.458) 0:22:47.692 ********** 2026-04-06 05:30:25.834842 | orchestrator | skipping: [testbed-node-3] 2026-04-06 
05:30:25.834853 | orchestrator | 2026-04-06 05:30:25.834864 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 05:30:25.834875 | orchestrator | Monday 06 April 2026 05:30:18 +0000 (0:00:00.252) 0:22:47.945 ********** 2026-04-06 05:30:25.834885 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:25.834896 | orchestrator | 2026-04-06 05:30:25.834907 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-06 05:30:25.834918 | orchestrator | Monday 06 April 2026 05:30:18 +0000 (0:00:00.139) 0:22:48.084 ********** 2026-04-06 05:30:25.834930 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-06 05:30:25.834941 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-06 05:30:25.834952 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-06 05:30:25.834963 | orchestrator | 2026-04-06 05:30:25.834974 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-06 05:30:25.834985 | orchestrator | Monday 06 April 2026 05:30:19 +0000 (0:00:00.693) 0:22:48.777 ********** 2026-04-06 05:30:25.834996 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-06 05:30:25.835015 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-06 05:30:25.835026 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-06 05:30:25.835037 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:25.835048 | orchestrator | 2026-04-06 05:30:25.835059 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-06 05:30:25.835070 | orchestrator | Monday 06 April 2026 05:30:19 +0000 (0:00:00.170) 0:22:48.947 ********** 2026-04-06 05:30:25.835098 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-04-06 05:30:25.835110 | 
orchestrator | 2026-04-06 05:30:25.835122 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-06 05:30:25.835135 | orchestrator | Monday 06 April 2026 05:30:19 +0000 (0:00:00.223) 0:22:49.171 ********** 2026-04-06 05:30:25.835145 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:25.835156 | orchestrator | 2026-04-06 05:30:25.835167 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-06 05:30:25.835185 | orchestrator | Monday 06 April 2026 05:30:19 +0000 (0:00:00.139) 0:22:49.311 ********** 2026-04-06 05:30:25.835197 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:25.835207 | orchestrator | 2026-04-06 05:30:25.835219 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-06 05:30:25.835229 | orchestrator | Monday 06 April 2026 05:30:19 +0000 (0:00:00.135) 0:22:49.446 ********** 2026-04-06 05:30:25.835240 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:25.835251 | orchestrator | 2026-04-06 05:30:25.835262 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-06 05:30:25.835273 | orchestrator | Monday 06 April 2026 05:30:19 +0000 (0:00:00.152) 0:22:49.599 ********** 2026-04-06 05:30:25.835284 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:25.835295 | orchestrator | 2026-04-06 05:30:25.835306 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-06 05:30:25.835317 | orchestrator | Monday 06 April 2026 05:30:20 +0000 (0:00:00.242) 0:22:49.841 ********** 2026-04-06 05:30:25.835328 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 05:30:25.835339 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 05:30:25.835370 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-04-06 05:30:25.835381 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:25.835392 | orchestrator | 2026-04-06 05:30:25.835403 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-06 05:30:25.835414 | orchestrator | Monday 06 April 2026 05:30:20 +0000 (0:00:00.818) 0:22:50.660 ********** 2026-04-06 05:30:25.835425 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 05:30:25.835436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 05:30:25.835447 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 05:30:25.835458 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:25.835469 | orchestrator | 2026-04-06 05:30:25.835480 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-06 05:30:25.835491 | orchestrator | Monday 06 April 2026 05:30:21 +0000 (0:00:00.775) 0:22:51.435 ********** 2026-04-06 05:30:25.835502 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 05:30:25.835513 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 05:30:25.835524 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 05:30:25.835535 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:25.835546 | orchestrator | 2026-04-06 05:30:25.835557 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-06 05:30:25.835568 | orchestrator | Monday 06 April 2026 05:30:22 +0000 (0:00:01.094) 0:22:52.530 ********** 2026-04-06 05:30:25.835579 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:25.835590 | orchestrator | 2026-04-06 05:30:25.835608 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-06 05:30:25.835619 | orchestrator | Monday 06 April 2026 05:30:22 +0000 
(0:00:00.158) 0:22:52.688 ********** 2026-04-06 05:30:25.835630 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-06 05:30:25.835641 | orchestrator | 2026-04-06 05:30:25.835652 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-06 05:30:25.835663 | orchestrator | Monday 06 April 2026 05:30:23 +0000 (0:00:00.391) 0:22:53.080 ********** 2026-04-06 05:30:25.835674 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:30:25.835685 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:30:25.835696 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:30:25.835707 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-06 05:30:25.835718 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-06 05:30:25.835729 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-06 05:30:25.835740 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-06 05:30:25.835751 | orchestrator | 2026-04-06 05:30:25.835762 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-06 05:30:25.835773 | orchestrator | Monday 06 April 2026 05:30:24 +0000 (0:00:00.823) 0:22:53.904 ********** 2026-04-06 05:30:25.835784 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:30:25.835795 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:30:25.835806 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:30:25.835817 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-06 
05:30:25.835828 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-06 05:30:25.835839 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-06 05:30:25.835850 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-06 05:30:25.835861 | orchestrator | 2026-04-06 05:30:25.835879 | orchestrator | TASK [Prevent restart from the packaging] ************************************** 2026-04-06 05:30:37.290493 | orchestrator | Monday 06 April 2026 05:30:25 +0000 (0:00:01.643) 0:22:55.548 ********** 2026-04-06 05:30:37.290602 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.290618 | orchestrator | 2026-04-06 05:30:37.290630 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-06 05:30:37.290640 | orchestrator | Monday 06 April 2026 05:30:25 +0000 (0:00:00.140) 0:22:55.688 ********** 2026-04-06 05:30:37.290651 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-04-06 05:30:37.290662 | orchestrator | 2026-04-06 05:30:37.290672 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-06 05:30:37.290695 | orchestrator | Monday 06 April 2026 05:30:26 +0000 (0:00:00.210) 0:22:55.899 ********** 2026-04-06 05:30:37.290705 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-04-06 05:30:37.290715 | orchestrator | 2026-04-06 05:30:37.290725 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-06 05:30:37.290735 | orchestrator | Monday 06 April 2026 05:30:26 +0000 (0:00:00.212) 0:22:56.112 ********** 2026-04-06 05:30:37.290745 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.290754 | orchestrator | 2026-04-06 05:30:37.290764 | orchestrator 
| TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-06 05:30:37.290774 | orchestrator | Monday 06 April 2026 05:30:26 +0000 (0:00:00.136) 0:22:56.248 ********** 2026-04-06 05:30:37.290784 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:37.290818 | orchestrator | 2026-04-06 05:30:37.290828 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-06 05:30:37.290838 | orchestrator | Monday 06 April 2026 05:30:27 +0000 (0:00:00.509) 0:22:56.757 ********** 2026-04-06 05:30:37.290847 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:37.290857 | orchestrator | 2026-04-06 05:30:37.290867 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-06 05:30:37.290876 | orchestrator | Monday 06 April 2026 05:30:27 +0000 (0:00:00.831) 0:22:57.588 ********** 2026-04-06 05:30:37.290886 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:37.290895 | orchestrator | 2026-04-06 05:30:37.290905 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-06 05:30:37.290915 | orchestrator | Monday 06 April 2026 05:30:28 +0000 (0:00:00.571) 0:22:58.160 ********** 2026-04-06 05:30:37.290924 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.290934 | orchestrator | 2026-04-06 05:30:37.290943 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-06 05:30:37.290953 | orchestrator | Monday 06 April 2026 05:30:28 +0000 (0:00:00.143) 0:22:58.304 ********** 2026-04-06 05:30:37.290962 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.290972 | orchestrator | 2026-04-06 05:30:37.290982 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-06 05:30:37.290991 | orchestrator | Monday 06 April 2026 05:30:28 +0000 (0:00:00.127) 0:22:58.431 ********** 2026-04-06 05:30:37.291001 | 
orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.291010 | orchestrator | 2026-04-06 05:30:37.291020 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-06 05:30:37.291030 | orchestrator | Monday 06 April 2026 05:30:28 +0000 (0:00:00.137) 0:22:58.569 ********** 2026-04-06 05:30:37.291042 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:37.291053 | orchestrator | 2026-04-06 05:30:37.291064 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-06 05:30:37.291075 | orchestrator | Monday 06 April 2026 05:30:29 +0000 (0:00:00.567) 0:22:59.137 ********** 2026-04-06 05:30:37.291086 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:37.291097 | orchestrator | 2026-04-06 05:30:37.291108 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-06 05:30:37.291120 | orchestrator | Monday 06 April 2026 05:30:30 +0000 (0:00:00.579) 0:22:59.717 ********** 2026-04-06 05:30:37.291131 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.291142 | orchestrator | 2026-04-06 05:30:37.291153 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-06 05:30:37.291164 | orchestrator | Monday 06 April 2026 05:30:30 +0000 (0:00:00.150) 0:22:59.867 ********** 2026-04-06 05:30:37.291175 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.291186 | orchestrator | 2026-04-06 05:30:37.291198 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-06 05:30:37.291210 | orchestrator | Monday 06 April 2026 05:30:30 +0000 (0:00:00.135) 0:23:00.002 ********** 2026-04-06 05:30:37.291222 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:37.291233 | orchestrator | 2026-04-06 05:30:37.291244 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-06 
05:30:37.291255 | orchestrator | Monday 06 April 2026 05:30:30 +0000 (0:00:00.155) 0:23:00.158 ********** 2026-04-06 05:30:37.291267 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:37.291278 | orchestrator | 2026-04-06 05:30:37.291290 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-06 05:30:37.291301 | orchestrator | Monday 06 April 2026 05:30:30 +0000 (0:00:00.153) 0:23:00.312 ********** 2026-04-06 05:30:37.291331 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:37.291342 | orchestrator | 2026-04-06 05:30:37.291353 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-06 05:30:37.291364 | orchestrator | Monday 06 April 2026 05:30:30 +0000 (0:00:00.163) 0:23:00.476 ********** 2026-04-06 05:30:37.291375 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.291386 | orchestrator | 2026-04-06 05:30:37.291405 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-06 05:30:37.291415 | orchestrator | Monday 06 April 2026 05:30:30 +0000 (0:00:00.131) 0:23:00.607 ********** 2026-04-06 05:30:37.291425 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.291434 | orchestrator | 2026-04-06 05:30:37.291444 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-06 05:30:37.291454 | orchestrator | Monday 06 April 2026 05:30:31 +0000 (0:00:00.463) 0:23:01.070 ********** 2026-04-06 05:30:37.291463 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.291473 | orchestrator | 2026-04-06 05:30:37.291498 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-06 05:30:37.291508 | orchestrator | Monday 06 April 2026 05:30:31 +0000 (0:00:00.136) 0:23:01.207 ********** 2026-04-06 05:30:37.291518 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:37.291527 | orchestrator | 2026-04-06 
05:30:37.291537 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-06 05:30:37.291546 | orchestrator | Monday 06 April 2026 05:30:31 +0000 (0:00:00.170) 0:23:01.378 ********** 2026-04-06 05:30:37.291556 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:37.291565 | orchestrator | 2026-04-06 05:30:37.291575 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-06 05:30:37.291589 | orchestrator | Monday 06 April 2026 05:30:31 +0000 (0:00:00.241) 0:23:01.619 ********** 2026-04-06 05:30:37.291599 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.291609 | orchestrator | 2026-04-06 05:30:37.291619 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-06 05:30:37.291628 | orchestrator | Monday 06 April 2026 05:30:32 +0000 (0:00:00.127) 0:23:01.747 ********** 2026-04-06 05:30:37.291638 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.291647 | orchestrator | 2026-04-06 05:30:37.291657 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-06 05:30:37.291666 | orchestrator | Monday 06 April 2026 05:30:32 +0000 (0:00:00.126) 0:23:01.873 ********** 2026-04-06 05:30:37.291676 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.291685 | orchestrator | 2026-04-06 05:30:37.291695 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-06 05:30:37.291704 | orchestrator | Monday 06 April 2026 05:30:32 +0000 (0:00:00.137) 0:23:02.011 ********** 2026-04-06 05:30:37.291714 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.291723 | orchestrator | 2026-04-06 05:30:37.291733 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-06 05:30:37.291742 | orchestrator | Monday 06 April 2026 05:30:32 +0000 (0:00:00.143) 0:23:02.155 
********** 2026-04-06 05:30:37.291752 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.291762 | orchestrator | 2026-04-06 05:30:37.291771 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-06 05:30:37.291781 | orchestrator | Monday 06 April 2026 05:30:32 +0000 (0:00:00.137) 0:23:02.292 ********** 2026-04-06 05:30:37.291790 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.291800 | orchestrator | 2026-04-06 05:30:37.291809 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-06 05:30:37.291819 | orchestrator | Monday 06 April 2026 05:30:32 +0000 (0:00:00.135) 0:23:02.428 ********** 2026-04-06 05:30:37.291829 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.291838 | orchestrator | 2026-04-06 05:30:37.291848 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-06 05:30:37.291858 | orchestrator | Monday 06 April 2026 05:30:32 +0000 (0:00:00.135) 0:23:02.563 ********** 2026-04-06 05:30:37.291867 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.291877 | orchestrator | 2026-04-06 05:30:37.291886 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-06 05:30:37.291896 | orchestrator | Monday 06 April 2026 05:30:32 +0000 (0:00:00.124) 0:23:02.688 ********** 2026-04-06 05:30:37.291905 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.291921 | orchestrator | 2026-04-06 05:30:37.291934 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-06 05:30:37.291949 | orchestrator | Monday 06 April 2026 05:30:33 +0000 (0:00:00.165) 0:23:02.853 ********** 2026-04-06 05:30:37.291964 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.291980 | orchestrator | 2026-04-06 05:30:37.291996 | orchestrator | TASK [ceph-common : Include 
configure_memory_allocator.yml] ******************** 2026-04-06 05:30:37.292012 | orchestrator | Monday 06 April 2026 05:30:33 +0000 (0:00:00.436) 0:23:03.289 ********** 2026-04-06 05:30:37.292028 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.292042 | orchestrator | 2026-04-06 05:30:37.292051 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-06 05:30:37.292061 | orchestrator | Monday 06 April 2026 05:30:33 +0000 (0:00:00.138) 0:23:03.427 ********** 2026-04-06 05:30:37.292070 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.292092 | orchestrator | 2026-04-06 05:30:37.292101 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-06 05:30:37.292111 | orchestrator | Monday 06 April 2026 05:30:33 +0000 (0:00:00.221) 0:23:03.649 ********** 2026-04-06 05:30:37.292121 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:37.292130 | orchestrator | 2026-04-06 05:30:37.292139 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-06 05:30:37.292149 | orchestrator | Monday 06 April 2026 05:30:34 +0000 (0:00:00.927) 0:23:04.577 ********** 2026-04-06 05:30:37.292159 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:37.292168 | orchestrator | 2026-04-06 05:30:37.292178 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-06 05:30:37.292188 | orchestrator | Monday 06 April 2026 05:30:36 +0000 (0:00:01.186) 0:23:05.763 ********** 2026-04-06 05:30:37.292197 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-04-06 05:30:37.292207 | orchestrator | 2026-04-06 05:30:37.292216 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-06 05:30:37.292226 | orchestrator | Monday 06 April 2026 05:30:36 +0000 (0:00:00.218) 0:23:05.981 
********** 2026-04-06 05:30:37.292235 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.292245 | orchestrator | 2026-04-06 05:30:37.292254 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-06 05:30:37.292264 | orchestrator | Monday 06 April 2026 05:30:36 +0000 (0:00:00.163) 0:23:06.144 ********** 2026-04-06 05:30:37.292273 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:37.292283 | orchestrator | 2026-04-06 05:30:37.292292 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-06 05:30:37.292302 | orchestrator | Monday 06 April 2026 05:30:36 +0000 (0:00:00.146) 0:23:06.291 ********** 2026-04-06 05:30:37.292351 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-06 05:30:37.292379 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-06 05:30:52.309636 | orchestrator | 2026-04-06 05:30:52.309757 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-06 05:30:52.309773 | orchestrator | Monday 06 April 2026 05:30:37 +0000 (0:00:00.817) 0:23:07.108 ********** 2026-04-06 05:30:52.309785 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:52.309798 | orchestrator | 2026-04-06 05:30:52.309810 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-06 05:30:52.309821 | orchestrator | Monday 06 April 2026 05:30:37 +0000 (0:00:00.457) 0:23:07.566 ********** 2026-04-06 05:30:52.309833 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:52.309845 | orchestrator | 2026-04-06 05:30:52.309872 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-06 05:30:52.309884 | orchestrator | Monday 06 April 2026 05:30:38 +0000 (0:00:00.229) 0:23:07.796 ********** 2026-04-06 05:30:52.309896 | 
orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:52.309907 | orchestrator | 2026-04-06 05:30:52.309918 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-06 05:30:52.309954 | orchestrator | Monday 06 April 2026 05:30:38 +0000 (0:00:00.468) 0:23:08.264 ********** 2026-04-06 05:30:52.309966 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:52.309977 | orchestrator | 2026-04-06 05:30:52.309988 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-06 05:30:52.309998 | orchestrator | Monday 06 April 2026 05:30:38 +0000 (0:00:00.121) 0:23:08.386 ********** 2026-04-06 05:30:52.310010 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-04-06 05:30:52.310086 | orchestrator | 2026-04-06 05:30:52.310098 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-06 05:30:52.310108 | orchestrator | Monday 06 April 2026 05:30:38 +0000 (0:00:00.225) 0:23:08.611 ********** 2026-04-06 05:30:52.310119 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:30:52.310130 | orchestrator | 2026-04-06 05:30:52.310140 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-06 05:30:52.310152 | orchestrator | Monday 06 April 2026 05:30:39 +0000 (0:00:00.683) 0:23:09.295 ********** 2026-04-06 05:30:52.310162 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-06 05:30:52.310173 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-06 05:30:52.310184 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-06 05:30:52.310195 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:30:52.310205 | orchestrator | 2026-04-06 05:30:52.310216 | orchestrator | TASK [ceph-container-common 
: Pulling node-exporter container image] ***********
2026-04-06 05:30:52.310227 | orchestrator | Monday 06 April 2026 05:30:39 +0000 (0:00:00.143) 0:23:09.438 **********
2026-04-06 05:30:52.310237 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:30:52.310248 | orchestrator |
2026-04-06 05:30:52.310333 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-06 05:30:52.310345 | orchestrator | Monday 06 April 2026 05:30:39 +0000 (0:00:00.149) 0:23:09.588 **********
2026-04-06 05:30:52.310356 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:30:52.310367 | orchestrator |
2026-04-06 05:30:52.310378 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-06 05:30:52.310388 | orchestrator | Monday 06 April 2026 05:30:40 +0000 (0:00:00.162) 0:23:09.750 **********
2026-04-06 05:30:52.310399 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:30:52.310409 | orchestrator |
2026-04-06 05:30:52.310421 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-06 05:30:52.310432 | orchestrator | Monday 06 April 2026 05:30:40 +0000 (0:00:00.142) 0:23:09.892 **********
2026-04-06 05:30:52.310442 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:30:52.310453 | orchestrator |
2026-04-06 05:30:52.310464 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-06 05:30:52.310474 | orchestrator | Monday 06 April 2026 05:30:40 +0000 (0:00:00.160) 0:23:10.053 **********
2026-04-06 05:30:52.310485 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:30:52.310495 | orchestrator |
2026-04-06 05:30:52.310506 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-06 05:30:52.310517 | orchestrator | Monday 06 April 2026 05:30:40 +0000 (0:00:00.146) 0:23:10.199 **********
2026-04-06 05:30:52.310527 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:30:52.310538 | orchestrator |
2026-04-06 05:30:52.310549 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-06 05:30:52.310559 | orchestrator | Monday 06 April 2026 05:30:41 +0000 (0:00:01.469) 0:23:11.668 **********
2026-04-06 05:30:52.310570 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:30:52.310581 | orchestrator |
2026-04-06 05:30:52.310592 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-06 05:30:52.310602 | orchestrator | Monday 06 April 2026 05:30:42 +0000 (0:00:00.543) 0:23:11.830 **********
2026-04-06 05:30:52.310613 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-04-06 05:30:52.310634 | orchestrator |
2026-04-06 05:30:52.310644 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-06 05:30:52.310655 | orchestrator | Monday 06 April 2026 05:30:42 +0000 (0:00:00.543) 0:23:12.373 **********
2026-04-06 05:30:52.310665 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:30:52.310676 | orchestrator |
2026-04-06 05:30:52.310687 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-06 05:30:52.310698 | orchestrator | Monday 06 April 2026 05:30:42 +0000 (0:00:00.139) 0:23:12.513 **********
2026-04-06 05:30:52.310708 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:30:52.310719 | orchestrator |
2026-04-06 05:30:52.310729 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-06 05:30:52.310740 | orchestrator | Monday 06 April 2026 05:30:42 +0000 (0:00:00.146) 0:23:12.660 **********
2026-04-06 05:30:52.310750 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:30:52.310761 | orchestrator |
2026-04-06 05:30:52.310772 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-06 05:30:52.310801 | orchestrator | Monday 06 April 2026 05:30:43 +0000 (0:00:00.180) 0:23:12.840 **********
2026-04-06 05:30:52.310812 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:30:52.310823 | orchestrator |
2026-04-06 05:30:52.310833 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-06 05:30:52.310844 | orchestrator | Monday 06 April 2026 05:30:43 +0000 (0:00:00.159) 0:23:13.000 **********
2026-04-06 05:30:52.310854 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:30:52.310865 | orchestrator |
2026-04-06 05:30:52.310876 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-06 05:30:52.310893 | orchestrator | Monday 06 April 2026 05:30:43 +0000 (0:00:00.164) 0:23:13.164 **********
2026-04-06 05:30:52.310905 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:30:52.310915 | orchestrator |
2026-04-06 05:30:52.310926 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-06 05:30:52.310937 | orchestrator | Monday 06 April 2026 05:30:43 +0000 (0:00:00.155) 0:23:13.320 **********
2026-04-06 05:30:52.310947 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:30:52.310958 | orchestrator |
2026-04-06 05:30:52.310969 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-06 05:30:52.310980 | orchestrator | Monday 06 April 2026 05:30:43 +0000 (0:00:00.149) 0:23:13.470 **********
2026-04-06 05:30:52.310990 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:30:52.311001 | orchestrator |
2026-04-06 05:30:52.311012 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-06 05:30:52.311022 | orchestrator | Monday 06 April 2026 05:30:43 +0000 (0:00:00.150) 0:23:13.621 **********
2026-04-06 05:30:52.311033 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:30:52.311044 | orchestrator |
2026-04-06 05:30:52.311054 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-06 05:30:52.311065 | orchestrator | Monday 06 April 2026 05:30:44 +0000 (0:00:00.233) 0:23:13.854 **********
2026-04-06 05:30:52.311076 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-04-06 05:30:52.311086 | orchestrator |
2026-04-06 05:30:52.311097 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-06 05:30:52.311108 | orchestrator | Monday 06 April 2026 05:30:44 +0000 (0:00:00.500) 0:23:14.355 **********
2026-04-06 05:30:52.311118 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-04-06 05:30:52.311129 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-04-06 05:30:52.311140 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-04-06 05:30:52.311151 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-04-06 05:30:52.311161 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-04-06 05:30:52.311172 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-04-06 05:30:52.311190 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-04-06 05:30:52.311200 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-04-06 05:30:52.311211 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-06 05:30:52.311222 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-06 05:30:52.311232 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-06 05:30:52.311243 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-06 05:30:52.311275 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-06 05:30:52.311286 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-06 05:30:52.311297 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-04-06 05:30:52.311308 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-04-06 05:30:52.311318 | orchestrator |
2026-04-06 05:30:52.311329 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-06 05:30:52.311340 | orchestrator | Monday 06 April 2026 05:30:50 +0000 (0:00:05.490) 0:23:19.845 **********
2026-04-06 05:30:52.311350 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-04-06 05:30:52.311361 | orchestrator |
2026-04-06 05:30:52.311372 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-06 05:30:52.311383 | orchestrator | Monday 06 April 2026 05:30:50 +0000 (0:00:00.230) 0:23:20.075 **********
2026-04-06 05:30:52.311393 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-06 05:30:52.311405 | orchestrator |
2026-04-06 05:30:52.311416 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-06 05:30:52.311427 | orchestrator | Monday 06 April 2026 05:30:50 +0000 (0:00:00.494) 0:23:20.569 **********
2026-04-06 05:30:52.311438 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-06 05:30:52.311448 | orchestrator |
2026-04-06 05:30:52.311546 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-06 05:30:52.311560 | orchestrator | Monday 06 April 2026 05:30:51 +0000 (0:00:01.027) 0:23:21.597 **********
2026-04-06 05:30:52.311571 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:30:52.311582 | orchestrator |
2026-04-06 05:30:52.311593 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-06 05:30:52.311603 | orchestrator | Monday 06 April 2026 05:30:52 +0000 (0:00:00.160) 0:23:21.757 **********
2026-04-06 05:30:52.311614 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:30:52.311625 | orchestrator |
2026-04-06 05:30:52.311635 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-06 05:30:52.311646 | orchestrator | Monday 06 April 2026 05:30:52 +0000 (0:00:00.139) 0:23:21.897 **********
2026-04-06 05:30:52.311657 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:30:52.311667 | orchestrator |
2026-04-06 05:30:52.311678 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-06 05:30:52.311697 | orchestrator | Monday 06 April 2026 05:30:52 +0000 (0:00:00.118) 0:23:22.016 **********
2026-04-06 05:31:12.182793 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:31:12.182872 | orchestrator |
2026-04-06 05:31:12.182879 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-06 05:31:12.182884 | orchestrator | Monday 06 April 2026 05:30:52 +0000 (0:00:00.143) 0:23:22.160 **********
2026-04-06 05:31:12.182888 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:31:12.182892 | orchestrator |
2026-04-06 05:31:12.182896 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-06 05:31:12.182901 | orchestrator | Monday 06 April 2026 05:30:52 +0000 (0:00:00.134) 0:23:22.295 **********
2026-04-06 05:31:12.182905 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:31:12.182923 | orchestrator |
2026-04-06 05:31:12.182927 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-06 05:31:12.182932 | orchestrator | Monday 06 April 2026 05:30:52 +0000 (0:00:00.131) 0:23:22.426 **********
2026-04-06 05:31:12.182935 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:31:12.182939 | orchestrator |
2026-04-06 05:31:12.182943 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-06 05:31:12.182947 | orchestrator | Monday 06 April 2026 05:30:52 +0000 (0:00:00.128) 0:23:22.554 **********
2026-04-06 05:31:12.182951 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:31:12.182954 | orchestrator |
2026-04-06 05:31:12.182958 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-06 05:31:12.182962 | orchestrator | Monday 06 April 2026 05:30:53 +0000 (0:00:00.497) 0:23:23.052 **********
2026-04-06 05:31:12.182966 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:31:12.182970 | orchestrator |
2026-04-06 05:31:12.182973 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-06 05:31:12.182977 | orchestrator | Monday 06 April 2026 05:30:53 +0000 (0:00:00.124) 0:23:23.177 **********
2026-04-06 05:31:12.182981 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:31:12.182985 | orchestrator |
2026-04-06 05:31:12.182988 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-06 05:31:12.182992 | orchestrator | Monday 06 April 2026 05:30:53 +0000 (0:00:00.157) 0:23:23.334 **********
2026-04-06 05:31:12.182996 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:31:12.183000 | orchestrator |
2026-04-06 05:31:12.183003 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-06 05:31:12.183007 | orchestrator | Monday 06 April 2026 05:30:53 +0000 (0:00:00.153) 0:23:23.487 **********
2026-04-06 05:31:12.183011 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-06 05:31:12.183015 | orchestrator |
2026-04-06 05:31:12.183019 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-06 05:31:12.183022 | orchestrator | Monday 06 April 2026 05:30:57 +0000 (0:00:03.235) 0:23:26.722 **********
2026-04-06 05:31:12.183026 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-06 05:31:12.183031 | orchestrator |
2026-04-06 05:31:12.183035 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-06 05:31:12.183039 | orchestrator | Monday 06 April 2026 05:30:57 +0000 (0:00:00.194) 0:23:26.917 **********
2026-04-06 05:31:12.183045 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-04-06 05:31:12.183051 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-04-06 05:31:12.183056 | orchestrator |
2026-04-06 05:31:12.183059 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-06 05:31:12.183063 | orchestrator | Monday 06 April 2026 05:31:00 +0000 (0:00:03.738) 0:23:30.655 **********
2026-04-06 05:31:12.183067 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:31:12.183071 | orchestrator |
2026-04-06 05:31:12.183074 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-06 05:31:12.183078 | orchestrator | Monday 06 April 2026 05:31:01 +0000 (0:00:00.140) 0:23:30.796 **********
2026-04-06 05:31:12.183082 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:31:12.183085 | orchestrator |
2026-04-06 05:31:12.183089 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-06 05:31:12.183097 | orchestrator | Monday 06 April 2026 05:31:01 +0000 (0:00:00.163) 0:23:30.959 **********
2026-04-06 05:31:12.183100 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:31:12.183104 | orchestrator |
2026-04-06 05:31:12.183108 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-06 05:31:12.183112 | orchestrator | Monday 06 April 2026 05:31:01 +0000 (0:00:00.159) 0:23:31.119 **********
2026-04-06 05:31:12.183115 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:31:12.183119 | orchestrator |
2026-04-06 05:31:12.183123 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-06 05:31:12.183156 | orchestrator | Monday 06 April 2026 05:31:01 +0000 (0:00:00.162) 0:23:31.281 **********
2026-04-06 05:31:12.183161 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:31:12.183165 | orchestrator |
2026-04-06 05:31:12.183169 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-06 05:31:12.183182 | orchestrator | Monday 06 April 2026 05:31:01 +0000 (0:00:00.158) 0:23:31.440 **********
2026-04-06 05:31:12.183209 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:31:12.183214 | orchestrator |
2026-04-06 05:31:12.183218 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-06 05:31:12.183221 | orchestrator | Monday 06 April 2026 05:31:02 +0000 (0:00:00.308) 0:23:31.749 **********
2026-04-06 05:31:12.183225 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-06 05:31:12.183229 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-06 05:31:12.183236 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-06 05:31:12.183240 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:31:12.183243 | orchestrator |
2026-04-06 05:31:12.183247 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-06 05:31:12.183251 | orchestrator | Monday 06 April 2026 05:31:02 +0000 (0:00:00.774) 0:23:32.523 **********
2026-04-06 05:31:12.183254 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-06 05:31:12.183258 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-06 05:31:12.183262 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-06 05:31:12.183266 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:31:12.183269 | orchestrator |
2026-04-06 05:31:12.183273 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-06 05:31:12.183277 | orchestrator | Monday 06 April 2026 05:31:03 +0000 (0:00:01.105) 0:23:33.629 **********
2026-04-06 05:31:12.183281 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-06 05:31:12.183284 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-06 05:31:12.183288 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-06 05:31:12.183292 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:31:12.183295 | orchestrator |
2026-04-06 05:31:12.183299 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-06 05:31:12.183303 | orchestrator | Monday 06 April 2026 05:31:04 +0000 (0:00:00.427) 0:23:34.057 **********
2026-04-06 05:31:12.183307 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:31:12.183310 | orchestrator |
2026-04-06 05:31:12.183314 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-06 05:31:12.183318 | orchestrator | Monday 06 April 2026 05:31:04 +0000 (0:00:00.183) 0:23:34.240 **********
2026-04-06 05:31:12.183322 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-06 05:31:12.183325 | orchestrator |
2026-04-06 05:31:12.183329 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-06 05:31:12.183333 | orchestrator | Monday 06 April 2026 05:31:04 +0000 (0:00:00.434) 0:23:34.675 **********
2026-04-06 05:31:12.183337 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:31:12.183340 | orchestrator |
2026-04-06 05:31:12.183344 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-04-06 05:31:12.183352 | orchestrator | Monday 06 April 2026 05:31:05 +0000 (0:00:00.797) 0:23:35.472 **********
2026-04-06 05:31:12.183356 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:31:12.183359 | orchestrator |
2026-04-06 05:31:12.183364 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-04-06 05:31:12.183368 | orchestrator | Monday 06 April 2026 05:31:05 +0000 (0:00:00.148) 0:23:35.621 **********
2026-04-06 05:31:12.183373 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3
2026-04-06 05:31:12.183377 | orchestrator |
2026-04-06 05:31:12.183382 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-04-06 05:31:12.183386 | orchestrator | Monday 06 April 2026 05:31:06 +0000 (0:00:00.569) 0:23:36.190 **********
2026-04-06 05:31:12.183391 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-06 05:31:12.183395 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-04-06 05:31:12.183399 | orchestrator |
2026-04-06 05:31:12.183403 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-04-06 05:31:12.183408 | orchestrator | Monday 06 April 2026 05:31:07 +0000 (0:00:00.831) 0:23:37.021 **********
2026-04-06 05:31:12.183412 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-06 05:31:12.183417 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-06 05:31:12.183421 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-06 05:31:12.183425 | orchestrator |
2026-04-06 05:31:12.183429 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-04-06 05:31:12.183434 | orchestrator | Monday 06 April 2026 05:31:09 +0000 (0:00:02.266) 0:23:39.288 **********
2026-04-06 05:31:12.183438 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-04-06 05:31:12.183443 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-06 05:31:12.183447 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:31:12.183451 | orchestrator |
2026-04-06 05:31:12.183456 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-04-06 05:31:12.183460 | orchestrator | Monday 06 April 2026 05:31:10 +0000 (0:00:00.964) 0:23:40.253 **********
2026-04-06 05:31:12.183464 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:31:12.183468 | orchestrator |
2026-04-06 05:31:12.183473 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-04-06 05:31:12.183477 | orchestrator | Monday 06 April 2026 05:31:11 +0000 (0:00:00.796) 0:23:41.049 **********
2026-04-06 05:31:12.183482 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:31:12.183486 | orchestrator |
2026-04-06 05:31:12.183490 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-04-06 05:31:12.183495 | orchestrator | Monday 06 April 2026 05:31:11 +0000 (0:00:00.125) 0:23:41.174 **********
2026-04-06 05:31:12.183499 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3
2026-04-06 05:31:12.183504 | orchestrator |
2026-04-06 05:31:12.183508 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-04-06 05:31:12.183513 | orchestrator | Monday 06 April 2026 05:31:12 +0000 (0:00:00.597) 0:23:41.772 **********
2026-04-06 05:31:12.183519 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3
2026-04-06 05:31:32.219165 | orchestrator |
2026-04-06 05:31:32.219265 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-04-06 05:31:32.219282 | orchestrator | Monday 06 April 2026 05:31:12 +0000 (0:00:00.560) 0:23:42.333 **********
2026-04-06 05:31:32.219294 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:31:32.219306 | orchestrator |
2026-04-06 05:31:32.219318 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-04-06 05:31:32.219342 | orchestrator | Monday 06 April 2026 05:31:13 +0000 (0:00:01.044) 0:23:43.378 **********
2026-04-06 05:31:32.219354 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:31:32.219365 | orchestrator |
2026-04-06 05:31:32.219377 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-04-06 05:31:32.219408 | orchestrator | Monday 06 April 2026 05:31:14 +0000 (0:00:00.919) 0:23:44.298 **********
2026-04-06 05:31:32.219419 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:31:32.219430 | orchestrator |
2026-04-06 05:31:32.219441 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-04-06 05:31:32.219452 | orchestrator | Monday 06 April 2026 05:31:15 +0000 (0:00:01.187) 0:23:45.485 **********
2026-04-06 05:31:32.219463 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:31:32.219474 | orchestrator |
2026-04-06 05:31:32.219485 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-04-06 05:31:32.219496 | orchestrator | Monday 06 April 2026 05:31:17 +0000 (0:00:01.239) 0:23:46.725 **********
2026-04-06 05:31:32.219507 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:31:32.219517 | orchestrator |
2026-04-06 05:31:32.219528 | orchestrator | TASK [Restart ceph mds] ********************************************************
2026-04-06 05:31:32.219539 | orchestrator | Monday 06 April 2026 05:31:17 +0000 (0:00:00.675) 0:23:47.400 **********
2026-04-06 05:31:32.219550 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:31:32.219561 | orchestrator |
2026-04-06 05:31:32.219572 | orchestrator | TASK [Restart active mds] ******************************************************
2026-04-06 05:31:32.219583 | orchestrator | Monday 06 April 2026 05:31:17 +0000 (0:00:00.137) 0:23:47.537 **********
2026-04-06 05:31:32.219594 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:31:32.219605 | orchestrator |
2026-04-06 05:31:32.219616 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] **************************************
2026-04-06 05:31:32.219627 | orchestrator |
2026-04-06 05:31:32.219638 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-06 05:31:32.219649 | orchestrator | Monday 06 April 2026 05:31:24 +0000 (0:00:06.232) 0:23:53.770 **********
2026-04-06 05:31:32.219659 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4, testbed-node-5
2026-04-06 05:31:32.219671 | orchestrator |
2026-04-06 05:31:32.219682 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-06 05:31:32.219692 | orchestrator | Monday 06 April 2026 05:31:24 +0000 (0:00:00.404) 0:23:54.174 **********
2026-04-06 05:31:32.219703 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:31:32.219716 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:31:32.219729 | orchestrator |
2026-04-06 05:31:32.219742 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-06 05:31:32.219754 | orchestrator | Monday 06 April 2026 05:31:25 +0000 (0:00:00.556) 0:23:54.730 **********
2026-04-06 05:31:32.219767 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:31:32.219780 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:31:32.219792 | orchestrator |
2026-04-06 05:31:32.219805 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-06 05:31:32.219816 | orchestrator | Monday 06 April 2026 05:31:25 +0000 (0:00:00.218) 0:23:54.948 **********
2026-04-06 05:31:32.219827 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:31:32.219838 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:31:32.219849 | orchestrator |
2026-04-06 05:31:32.219860 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-06 05:31:32.219871 | orchestrator | Monday 06 April 2026 05:31:25 +0000 (0:00:00.542) 0:23:55.491 **********
2026-04-06 05:31:32.219882 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:31:32.219893 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:31:32.219904 | orchestrator |
2026-04-06 05:31:32.219914 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-06 05:31:32.219925 | orchestrator | Monday 06 April 2026 05:31:26 +0000 (0:00:00.531) 0:23:56.022 **********
2026-04-06 05:31:32.219936 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:31:32.219947 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:31:32.219958 | orchestrator |
2026-04-06 05:31:32.219969 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-06 05:31:32.219981 | orchestrator | Monday 06 April 2026 05:31:26 +0000 (0:00:00.229) 0:23:56.251 **********
2026-04-06 05:31:32.220000 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:31:32.220011 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:31:32.220022 | orchestrator |
2026-04-06 05:31:32.220033 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-06 05:31:32.220044 | orchestrator | Monday 06 April 2026 05:31:26 +0000 (0:00:00.268) 0:23:56.520 **********
2026-04-06 05:31:32.220055 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:31:32.220066 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:31:32.220077 | orchestrator |
2026-04-06 05:31:32.220088 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-06 05:31:32.220099 | orchestrator | Monday 06 April 2026 05:31:27 +0000 (0:00:00.260) 0:23:56.780 **********
2026-04-06 05:31:32.220110 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:31:32.220138 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:31:32.220150 | orchestrator |
2026-04-06 05:31:32.220161 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-06 05:31:32.220172 | orchestrator | Monday 06 April 2026 05:31:27 +0000 (0:00:00.226) 0:23:57.007 **********
2026-04-06 05:31:32.220183 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-06 05:31:32.220194 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 05:31:32.220205 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 05:31:32.220216 | orchestrator |
2026-04-06 05:31:32.220227 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-06 05:31:32.220254 | orchestrator | Monday 06 April 2026 05:31:28 +0000 (0:00:01.013) 0:23:58.021 **********
2026-04-06 05:31:32.220265 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:31:32.220276 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:31:32.220287 | orchestrator |
2026-04-06 05:31:32.220298 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-06 05:31:32.220309 | orchestrator | Monday 06 April 2026 05:31:28 +0000 (0:00:00.358) 0:23:58.379 **********
2026-04-06 05:31:32.220325 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-06 05:31:32.220336 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 05:31:32.220347 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 05:31:32.220358 | orchestrator |
2026-04-06 05:31:32.220369 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-06 05:31:32.220380 | orchestrator | Monday 06 April 2026 05:31:31 +0000 (0:00:02.373) 0:24:00.753 **********
2026-04-06 05:31:32.220391 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-06 05:31:32.220402 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-06 05:31:32.220413 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-06 05:31:32.220424 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:31:32.220435 | orchestrator |
2026-04-06 05:31:32.220446 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-06 05:31:32.220456 | orchestrator | Monday 06 April 2026 05:31:31 +0000 (0:00:00.374) 0:24:01.127 **********
2026-04-06 05:31:32.220468 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-06 05:31:32.220481 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-06 05:31:32.220493 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-06 05:31:32.220511 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:31:32.220522 | orchestrator |
2026-04-06 05:31:32.220533 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-06 05:31:32.220544 | orchestrator | Monday 06 April 2026 05:31:31 +0000 (0:00:00.564) 0:24:01.692 **********
2026-04-06 05:31:32.220557 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-06 05:31:32.220570 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-06 05:31:32.220581 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-06 05:31:32.220592 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:31:32.220603 | orchestrator |
2026-04-06 05:31:32.220614 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-06 05:31:32.220625 | orchestrator | Monday 06 April 2026 05:31:32 +0000 (0:00:00.165) 0:24:01.857 **********
2026-04-06 05:31:32.220646 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '06ed7bf51830', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-06 05:31:29.559900', 'end': '2026-04-06 05:31:29.604213', 'delta': '0:00:00.044313', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['06ed7bf51830'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-06 05:31:38.164610 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '6879ce368bbc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-06 05:31:30.106824', 'end': '2026-04-06 05:31:30.157344', 'delta': '0:00:00.050520', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6879ce368bbc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-06 05:31:38.164717 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'a00606ebddc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-06 05:31:30.650022', 'end': '2026-04-06 05:31:30.692256', 'delta': '0:00:00.042234', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a00606ebddc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-06 05:31:38.164755 | orchestrator |
2026-04-06 05:31:38.164770 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-06 05:31:38.164782 | orchestrator | Monday 06 April 2026 05:31:32 +0000 (0:00:00.188) 0:24:02.045 **********
2026-04-06 05:31:38.164793 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:31:38.164805 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:31:38.164816 | orchestrator |
2026-04-06 05:31:38.164827 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-06 05:31:38.164838 | orchestrator | Monday 06 April 2026 05:31:32 +0000 (0:00:00.337) 0:24:02.383 **********
2026-04-06 05:31:38.164849 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:31:38.164860 | orchestrator |
2026-04-06 05:31:38.164871 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-06 05:31:38.164881 | orchestrator | Monday 06
April 2026 05:31:32 +0000 (0:00:00.224) 0:24:02.608 ********** 2026-04-06 05:31:38.164892 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:31:38.164903 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:31:38.164913 | orchestrator | 2026-04-06 05:31:38.164924 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-06 05:31:38.164934 | orchestrator | Monday 06 April 2026 05:31:33 +0000 (0:00:00.277) 0:24:02.885 ********** 2026-04-06 05:31:38.164945 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:31:38.164956 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:31:38.164966 | orchestrator | 2026-04-06 05:31:38.164977 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-06 05:31:38.164987 | orchestrator | Monday 06 April 2026 05:31:35 +0000 (0:00:02.047) 0:24:04.932 ********** 2026-04-06 05:31:38.164998 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:31:38.165008 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:31:38.165019 | orchestrator | 2026-04-06 05:31:38.165030 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-06 05:31:38.165041 | orchestrator | Monday 06 April 2026 05:31:35 +0000 (0:00:00.218) 0:24:05.151 ********** 2026-04-06 05:31:38.165051 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:31:38.165062 | orchestrator | 2026-04-06 05:31:38.165073 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-06 05:31:38.165083 | orchestrator | Monday 06 April 2026 05:31:35 +0000 (0:00:00.316) 0:24:05.467 ********** 2026-04-06 05:31:38.165094 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:31:38.165126 | orchestrator | 2026-04-06 05:31:38.165139 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-06 
05:31:38.165153 | orchestrator | Monday 06 April 2026 05:31:35 +0000 (0:00:00.217) 0:24:05.684 ********** 2026-04-06 05:31:38.165166 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:31:38.165178 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:31:38.165190 | orchestrator | 2026-04-06 05:31:38.165203 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-06 05:31:38.165216 | orchestrator | Monday 06 April 2026 05:31:36 +0000 (0:00:00.225) 0:24:05.910 ********** 2026-04-06 05:31:38.165228 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:31:38.165240 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:31:38.165253 | orchestrator | 2026-04-06 05:31:38.165266 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-06 05:31:38.165279 | orchestrator | Monday 06 April 2026 05:31:36 +0000 (0:00:00.203) 0:24:06.114 ********** 2026-04-06 05:31:38.165292 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:31:38.165304 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:31:38.165317 | orchestrator | 2026-04-06 05:31:38.165329 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-06 05:31:38.165351 | orchestrator | Monday 06 April 2026 05:31:36 +0000 (0:00:00.281) 0:24:06.395 ********** 2026-04-06 05:31:38.165364 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:31:38.165377 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:31:38.165389 | orchestrator | 2026-04-06 05:31:38.165419 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-06 05:31:38.165432 | orchestrator | Monday 06 April 2026 05:31:36 +0000 (0:00:00.203) 0:24:06.598 ********** 2026-04-06 05:31:38.165445 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:31:38.165465 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:31:38.165479 | orchestrator | 2026-04-06 
05:31:38.165491 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-06 05:31:38.165502 | orchestrator | Monday 06 April 2026 05:31:37 +0000 (0:00:00.248) 0:24:06.847 ********** 2026-04-06 05:31:38.165513 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:31:38.165523 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:31:38.165534 | orchestrator | 2026-04-06 05:31:38.165545 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-06 05:31:38.165556 | orchestrator | Monday 06 April 2026 05:31:37 +0000 (0:00:00.527) 0:24:07.374 ********** 2026-04-06 05:31:38.165567 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:31:38.165577 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:31:38.165588 | orchestrator | 2026-04-06 05:31:38.165599 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-06 05:31:38.165609 | orchestrator | Monday 06 April 2026 05:31:37 +0000 (0:00:00.285) 0:24:07.659 ********** 2026-04-06 05:31:38.165622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:31:38.165637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8c307d7c--3927--5061--a8a8--155bb148bb1a-osd--block--8c307d7c--3927--5061--a8a8--155bb148bb1a', 'dm-uuid-LVM-5SBcK6LYcqc3U9JW4A7AEqQb9XhQaJZNALmkUrHWUZpUhCY8hyCk4SVv02FoAkUp'], 'uuids': ['83378823-14d2-4928-9007-67488abc99a7'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': 
None, 'sas_device_handle': None, 'serial': '48ce9836', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ALmkUr-HWUZ-pUhC-Y8hy-Ck4S-Vv02-FoAkUp']}})  2026-04-06 05:31:38.165649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de', 'scsi-SQEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4a868051', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-06 05:31:38.165663 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-9JZghf-Tj4T-hJH3-TdHl-k5PF-Zmcx-ynVATr', 'scsi-0QEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554', 'scsi-SQEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f369a6c0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3-osd--block--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3']}})  2026-04-06 05:31:38.165683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:31:38.165707 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:31:38.260705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-06 05:31:38.260773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:31:38.260781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-7tdY8L-LV0U-b3l0-Z8I0-Y4ch-NDJ3-j6J7vO', 'dm-uuid-CRYPT-LUKS2-dd6ed06a0d554d6181a429bf5c5222d7-7tdY8L-LV0U-b3l0-Z8I0-Y4ch-NDJ3-j6J7vO'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-06 05:31:38.260785 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:31:38.260790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3-osd--block--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3', 'dm-uuid-LVM-UTQM7S53ibMHEifiI2Bv5Thw7s0lsM0j7tdY8LLV0Ub3l0Z8I0Y4chNDJ3j6J7vO'], 'uuids': ['dd6ed06a-0d55-4d61-81a4-29bf5c5222d7'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f369a6c0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['7tdY8L-LV0U-b3l0-Z8I0-Y4ch-NDJ3-j6J7vO']}})  2026-04-06 05:31:38.260796 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-bmjYoX-DOC2-0AWC-rYYB-WEnJ-01uQ-WQd2JR', 'scsi-0QEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867', 'scsi-SQEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48ce9836', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8c307d7c--3927--5061--a8a8--155bb148bb1a-osd--block--8c307d7c--3927--5061--a8a8--155bb148bb1a']}})  2026-04-06 05:31:38.260817 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:31:38.260840 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:31:38.260847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336'], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '40f67feb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part16', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part14', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part15', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part1', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-06 05:31:38.260854 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d79f264--f564--5244--b3d4--1e30cd615742-osd--block--4d79f264--f564--5244--b3d4--1e30cd615742', 'dm-uuid-LVM-Z6Gfl68NWHSIaTDLndMKbJ9g2vXxLKS7H7IVDVpTPXM3dDz207hlZrQACS13BMNP'], 'uuids': ['22ded8c8-9142-404c-a572-856e0a8f4fba'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c3f554c9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['H7IVDV-pTPX-M3dD-z207-hlZr-QACS-13BMNP']}})  2026-04-06 05:31:38.260862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:31:38.260873 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c', 'scsi-SQEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd180ec14', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-06 05:31:38.379064 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:31:38.379224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-lROe02-FRbV-W78v-Dfl5-E5Bd-fAVM-rPPzrC', 'scsi-0QEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d', 'scsi-SQEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '43e26771', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--fcd584d6--c8ff--5eaf--81cc--26105cfb5447-osd--block--fcd584d6--c8ff--5eaf--81cc--26105cfb5447']}})  2026-04-06 05:31:38.379247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ALmkUr-HWUZ-pUhC-Y8hy-Ck4S-Vv02-FoAkUp', 'dm-uuid-CRYPT-LUKS2-8337882314d24928900767488abc99a7-ALmkUr-HWUZ-pUhC-Y8hy-Ck4S-Vv02-FoAkUp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-06 05:31:38.379264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:31:38.379280 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:31:38.379297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:31:38.379340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-06 05:31:38.379357 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:31:38.379413 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt', 'dm-uuid-CRYPT-LUKS2-0cb92a9095ac4932ba9885def0a3f871-WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-06 05:31:38.379431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:31:38.379447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': 
['dm-name-ceph--fcd584d6--c8ff--5eaf--81cc--26105cfb5447-osd--block--fcd584d6--c8ff--5eaf--81cc--26105cfb5447', 'dm-uuid-LVM-DDg0C3XoaiYrOzMcB0kfPfqzHg8E5JhRWG4AoOycNeM5Q2WICfjMBHF0YX2mqeJt'], 'uuids': ['0cb92a90-95ac-4932-ba98-85def0a3f871'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '43e26771', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt']}})  2026-04-06 05:31:38.379464 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5lLdRw-7tLp-t2wE-raTC-2xO3-NEEr-mCIRos', 'scsi-0QEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0', 'scsi-SQEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c3f554c9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4d79f264--f564--5244--b3d4--1e30cd615742-osd--block--4d79f264--f564--5244--b3d4--1e30cd615742']}})  2026-04-06 05:31:38.379479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:31:38.379527 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd99642af', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-06 05:31:38.729822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:31:38.729917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:31:38.729931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-H7IVDV-pTPX-M3dD-z207-hlZr-QACS-13BMNP', 'dm-uuid-CRYPT-LUKS2-22ded8c89142404ca572856e0a8f4fba-H7IVDV-pTPX-M3dD-z207-hlZr-QACS-13BMNP'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-06 05:31:38.729977 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:31:38.729991 | orchestrator | 2026-04-06 05:31:38.730003 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-06 05:31:38.730015 | orchestrator | Monday 06 April 2026 05:31:38 +0000 (0:00:00.579) 0:24:08.239 ********** 2026-04-06 05:31:38.730091 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.730156 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8c307d7c--3927--5061--a8a8--155bb148bb1a-osd--block--8c307d7c--3927--5061--a8a8--155bb148bb1a', 'dm-uuid-LVM-5SBcK6LYcqc3U9JW4A7AEqQb9XhQaJZNALmkUrHWUZpUhCY8hyCk4SVv02FoAkUp'], 'uuids': ['83378823-14d2-4928-9007-67488abc99a7'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '48ce9836', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ALmkUr-HWUZ-pUhC-Y8hy-Ck4S-Vv02-FoAkUp']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.730184 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de', 'scsi-SQEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4a868051', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.730216 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-9JZghf-Tj4T-hJH3-TdHl-k5PF-Zmcx-ynVATr', 'scsi-0QEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554', 'scsi-SQEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f369a6c0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3-osd--block--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.730231 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.730253 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.730265 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.730283 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.730295 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.730314 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-7tdY8L-LV0U-b3l0-Z8I0-Y4ch-NDJ3-j6J7vO', 'dm-uuid-CRYPT-LUKS2-dd6ed06a0d554d6181a429bf5c5222d7-7tdY8L-LV0U-b3l0-Z8I0-Y4ch-NDJ3-j6J7vO'], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.797220 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d79f264--f564--5244--b3d4--1e30cd615742-osd--block--4d79f264--f564--5244--b3d4--1e30cd615742', 'dm-uuid-LVM-Z6Gfl68NWHSIaTDLndMKbJ9g2vXxLKS7H7IVDVpTPXM3dDz207hlZrQACS13BMNP'], 'uuids': ['22ded8c8-9142-404c-a572-856e0a8f4fba'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c3f554c9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['H7IVDV-pTPX-M3dD-z207-hlZr-QACS-13BMNP']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.797343 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.797362 | orchestrator | skipping: [testbed-node-5] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c', 'scsi-SQEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd180ec14', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.797388 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3-osd--block--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3', 'dm-uuid-LVM-UTQM7S53ibMHEifiI2Bv5Thw7s0lsM0j7tdY8LLV0Ub3l0Z8I0Y4chNDJ3j6J7vO'], 'uuids': ['dd6ed06a-0d55-4d61-81a4-29bf5c5222d7'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f369a6c0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['7tdY8L-LV0U-b3l0-Z8I0-Y4ch-NDJ3-j6J7vO']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.797420 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': 
{'ids': ['lvm-pv-uuid-lROe02-FRbV-W78v-Dfl5-E5Bd-fAVM-rPPzrC', 'scsi-0QEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d', 'scsi-SQEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '43e26771', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--fcd584d6--c8ff--5eaf--81cc--26105cfb5447-osd--block--fcd584d6--c8ff--5eaf--81cc--26105cfb5447']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.797450 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-bmjYoX-DOC2-0AWC-rYYB-WEnJ-01uQ-WQd2JR', 'scsi-0QEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867', 'scsi-SQEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48ce9836', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--8c307d7c--3927--5061--a8a8--155bb148bb1a-osd--block--8c307d7c--3927--5061--a8a8--155bb148bb1a']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.797464 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.797477 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.797494 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.797518 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '40f67feb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part16', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part14', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part15', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part1', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part1'], 'uuids': 
['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.890212 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.890313 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.890344 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.890357 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.890369 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt', 'dm-uuid-CRYPT-LUKS2-0cb92a9095ac4932ba9885def0a3f871-WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 
05:31:38.890420 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ALmkUr-HWUZ-pUhC-Y8hy-Ck4S-Vv02-FoAkUp', 'dm-uuid-CRYPT-LUKS2-8337882314d24928900767488abc99a7-ALmkUr-HWUZ-pUhC-Y8hy-Ck4S-Vv02-FoAkUp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.890433 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.890446 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:31:38.890466 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fcd584d6--c8ff--5eaf--81cc--26105cfb5447-osd--block--fcd584d6--c8ff--5eaf--81cc--26105cfb5447', 'dm-uuid-LVM-DDg0C3XoaiYrOzMcB0kfPfqzHg8E5JhRWG4AoOycNeM5Q2WICfjMBHF0YX2mqeJt'], 'uuids': 
['0cb92a90-95ac-4932-ba98-85def0a3f871'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '43e26771', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.890480 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5lLdRw-7tLp-t2wE-raTC-2xO3-NEEr-mCIRos', 'scsi-0QEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0', 'scsi-SQEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c3f554c9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4d79f264--f564--5244--b3d4--1e30cd615742-osd--block--4d79f264--f564--5244--b3d4--1e30cd615742']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.890494 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:38.890525 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd99642af', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part14', 
'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:47.856141 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:47.856263 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:47.856322 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-H7IVDV-pTPX-M3dD-z207-hlZr-QACS-13BMNP', 'dm-uuid-CRYPT-LUKS2-22ded8c89142404ca572856e0a8f4fba-H7IVDV-pTPX-M3dD-z207-hlZr-QACS-13BMNP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:31:47.856346 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:31:47.856368 | orchestrator | 2026-04-06 05:31:47.856389 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-06 05:31:47.856410 | orchestrator | Monday 06 April 2026 05:31:39 +0000 (0:00:00.516) 0:24:08.756 ********** 2026-04-06 05:31:47.856430 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:31:47.856450 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:31:47.856469 | orchestrator | 2026-04-06 05:31:47.856485 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-06 05:31:47.856496 | orchestrator | Monday 06 April 2026 05:31:39 +0000 (0:00:00.609) 0:24:09.365 ********** 2026-04-06 05:31:47.856507 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:31:47.856518 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:31:47.856529 | orchestrator | 2026-04-06 05:31:47.856540 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 05:31:47.856550 | orchestrator | Monday 06 April 2026 05:31:39 +0000 (0:00:00.237) 0:24:09.602 ********** 2026-04-06 05:31:47.856561 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:31:47.856572 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:31:47.856585 | orchestrator | 2026-04-06 05:31:47.856598 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 05:31:47.856612 | orchestrator | Monday 06 April 2026 05:31:40 +0000 (0:00:00.567) 0:24:10.170 ********** 2026-04-06 05:31:47.856624 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:31:47.856637 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:31:47.856649 | orchestrator | 2026-04-06 05:31:47.856662 | orchestrator | TASK [ceph-facts : 
Read osd pool default crush rule] *************************** 2026-04-06 05:31:47.856674 | orchestrator | Monday 06 April 2026 05:31:41 +0000 (0:00:00.552) 0:24:10.723 ********** 2026-04-06 05:31:47.856687 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:31:47.856700 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:31:47.856713 | orchestrator | 2026-04-06 05:31:47.856725 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 05:31:47.856738 | orchestrator | Monday 06 April 2026 05:31:41 +0000 (0:00:00.353) 0:24:11.076 ********** 2026-04-06 05:31:47.856750 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:31:47.856762 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:31:47.856776 | orchestrator | 2026-04-06 05:31:47.856788 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-06 05:31:47.856801 | orchestrator | Monday 06 April 2026 05:31:41 +0000 (0:00:00.244) 0:24:11.321 ********** 2026-04-06 05:31:47.856814 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-06 05:31:47.856827 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-06 05:31:47.856840 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-06 05:31:47.856853 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-06 05:31:47.856865 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-06 05:31:47.856883 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-06 05:31:47.856907 | orchestrator | 2026-04-06 05:31:47.856934 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-06 05:31:47.856968 | orchestrator | Monday 06 April 2026 05:31:42 +0000 (0:00:00.784) 0:24:12.106 ********** 2026-04-06 05:31:47.857013 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-06 05:31:47.857044 | orchestrator 
| skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-06 05:31:47.857064 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-06 05:31:47.857110 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:31:47.857129 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-06 05:31:47.857144 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-06 05:31:47.857155 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-06 05:31:47.857166 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:31:47.857177 | orchestrator |
2026-04-06 05:31:47.857187 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-06 05:31:47.857198 | orchestrator | Monday 06 April 2026 05:31:42 +0000 (0:00:00.278) 0:24:12.385 **********
2026-04-06 05:31:47.857209 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4, testbed-node-5
2026-04-06 05:31:47.857221 | orchestrator |
2026-04-06 05:31:47.857232 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-06 05:31:47.857245 | orchestrator | Monday 06 April 2026 05:31:43 +0000 (0:00:00.713) 0:24:13.098 **********
2026-04-06 05:31:47.857255 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:31:47.857266 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:31:47.857277 | orchestrator |
2026-04-06 05:31:47.857287 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-06 05:31:47.857298 | orchestrator | Monday 06 April 2026 05:31:43 +0000 (0:00:00.242) 0:24:13.341 **********
2026-04-06 05:31:47.857309 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:31:47.857319 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:31:47.857330 | orchestrator |
2026-04-06 05:31:47.857341 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-06 05:31:47.857351 | orchestrator | Monday 06 April 2026 05:31:43 +0000 (0:00:00.231) 0:24:13.572 **********
2026-04-06 05:31:47.857362 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:31:47.857372 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:31:47.857383 | orchestrator |
2026-04-06 05:31:47.857393 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-06 05:31:47.857404 | orchestrator | Monday 06 April 2026 05:31:44 +0000 (0:00:00.401) 0:24:13.835 **********
2026-04-06 05:31:47.857415 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:31:47.857426 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:31:47.857436 | orchestrator |
2026-04-06 05:31:47.857447 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-06 05:31:47.857458 | orchestrator | Monday 06 April 2026 05:31:44 +0000 (0:00:00.388) 0:24:14.237 **********
2026-04-06 05:31:47.857468 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-06 05:31:47.857479 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-06 05:31:47.857489 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-06 05:31:47.857500 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:31:47.857511 | orchestrator |
2026-04-06 05:31:47.857522 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-06 05:31:47.857533 | orchestrator | Monday 06 April 2026 05:31:44 +0000 (0:00:00.396) 0:24:14.625 **********
2026-04-06 05:31:47.857584 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-06 05:31:47.857596 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-06 05:31:47.857606 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-06 05:31:47.857617 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:31:47.857628 | orchestrator |
2026-04-06 05:31:47.857638 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-06 05:31:47.857659 | orchestrator | Monday 06 April 2026 05:31:45 +0000 (0:00:00.396) 0:24:15.021 **********
2026-04-06 05:31:47.857670 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-06 05:31:47.857680 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-06 05:31:47.857691 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-06 05:31:47.857701 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:31:47.857712 | orchestrator |
2026-04-06 05:31:47.857723 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-06 05:31:47.857733 | orchestrator | Monday 06 April 2026 05:31:46 +0000 (0:00:00.797) 0:24:15.819 **********
2026-04-06 05:31:47.857744 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:31:47.857755 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:31:47.857765 | orchestrator |
2026-04-06 05:31:47.857776 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-06 05:31:47.857787 | orchestrator | Monday 06 April 2026 05:31:46 +0000 (0:00:00.555) 0:24:16.375 **********
2026-04-06 05:31:47.857797 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-06 05:31:47.857809 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-06 05:31:47.857819 | orchestrator |
2026-04-06 05:31:47.857830 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-06 05:31:47.857841 | orchestrator | Monday 06 April 2026 05:31:47 +0000 (0:00:00.449) 0:24:16.825 **********
2026-04-06 05:31:47.857851 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-06 05:31:47.857862 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 05:31:47.857873 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 05:31:47.857883 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-06 05:31:47.857894 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-04-06 05:31:47.857905 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-06 05:31:47.857924 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-06 05:32:01.082134 | orchestrator |
2026-04-06 05:32:01.082299 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-06 05:32:01.082315 | orchestrator | Monday 06 April 2026 05:31:47 +0000 (0:00:00.829) 0:24:17.654 **********
2026-04-06 05:32:01.082326 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-06 05:32:01.082337 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 05:32:01.082347 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 05:32:01.082357 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-06 05:32:01.082368 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-04-06 05:32:01.082379 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-06 05:32:01.082388 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-06 05:32:01.082398 | orchestrator |
2026-04-06 05:32:01.082408 | orchestrator | TASK [Prevent restarts from the packaging] *************************************
2026-04-06 05:32:01.082418 | orchestrator | Monday 06 April 2026 05:31:49 +0000 (0:00:01.731) 0:24:19.386 **********
2026-04-06 05:32:01.082428 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:01.082439 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:01.082448 | orchestrator |
2026-04-06 05:32:01.082458 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-06 05:32:01.082468 | orchestrator | Monday 06 April 2026 05:31:49 +0000 (0:00:00.229) 0:24:19.616 **********
2026-04-06 05:32:01.082478 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4, testbed-node-5
2026-04-06 05:32:01.082512 | orchestrator |
2026-04-06 05:32:01.082522 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-06 05:32:01.082534 | orchestrator | Monday 06 April 2026 05:31:50 +0000 (0:00:00.381) 0:24:19.997 **********
2026-04-06 05:32:01.082546 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4, testbed-node-5
2026-04-06 05:32:01.082558 | orchestrator |
2026-04-06 05:32:01.082569 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-06 05:32:01.082581 | orchestrator | Monday 06 April 2026 05:31:50 +0000 (0:00:00.680) 0:24:20.678 **********
2026-04-06 05:32:01.082592 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:01.082604 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:01.082616 | orchestrator |
2026-04-06 05:32:01.082628 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-06 05:32:01.082639 | orchestrator | Monday 06 April 2026 05:31:51 +0000 (0:00:00.224) 0:24:20.902 **********
2026-04-06 05:32:01.082651 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:32:01.082662 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:32:01.082674 | orchestrator |
2026-04-06 05:32:01.082686 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-06 05:32:01.082697 | orchestrator | Monday 06 April 2026 05:31:51 +0000 (0:00:00.569) 0:24:21.471 **********
2026-04-06 05:32:01.082708 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:32:01.082720 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:32:01.082730 | orchestrator |
2026-04-06 05:32:01.082742 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-06 05:32:01.082754 | orchestrator | Monday 06 April 2026 05:31:52 +0000 (0:00:00.658) 0:24:22.130 **********
2026-04-06 05:32:01.082765 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:32:01.082777 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:32:01.082789 | orchestrator |
2026-04-06 05:32:01.082799 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-06 05:32:01.082811 | orchestrator | Monday 06 April 2026 05:31:53 +0000 (0:00:00.660) 0:24:22.790 **********
2026-04-06 05:32:01.082823 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:01.082834 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:01.082846 | orchestrator |
2026-04-06 05:32:01.082858 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-06 05:32:01.082870 | orchestrator | Monday 06 April 2026 05:31:53 +0000 (0:00:00.217) 0:24:23.007 **********
2026-04-06 05:32:01.082881 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:01.082894 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:01.082905 | orchestrator |
2026-04-06 05:32:01.082915 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-06 05:32:01.082925 | orchestrator | Monday 06 April 2026 05:31:53 +0000 (0:00:00.535) 0:24:23.543 **********
2026-04-06 05:32:01.082935 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:01.082944 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:01.082954 | orchestrator |
2026-04-06 05:32:01.082963 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-06 05:32:01.082973 | orchestrator | Monday 06 April 2026 05:31:54 +0000 (0:00:00.214) 0:24:23.757 **********
2026-04-06 05:32:01.082983 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:32:01.082992 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:32:01.083002 | orchestrator |
2026-04-06 05:32:01.083012 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-06 05:32:01.083021 | orchestrator | Monday 06 April 2026 05:31:54 +0000 (0:00:00.612) 0:24:24.370 **********
2026-04-06 05:32:01.083031 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:32:01.083058 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:32:01.083068 | orchestrator |
2026-04-06 05:32:01.083078 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-06 05:32:01.083088 | orchestrator | Monday 06 April 2026 05:31:55 +0000 (0:00:00.633) 0:24:25.004 **********
2026-04-06 05:32:01.083105 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:01.083115 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:01.083125 | orchestrator |
2026-04-06 05:32:01.083135 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-06 05:32:01.083144 | orchestrator | Monday 06 April 2026 05:31:55 +0000 (0:00:00.241) 0:24:25.245 **********
2026-04-06 05:32:01.083154 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:01.083183 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:01.083194 | orchestrator |
2026-04-06 05:32:01.083209 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-06 05:32:01.083219 | orchestrator | Monday 06 April 2026 05:31:55 +0000 (0:00:00.228) 0:24:25.474 **********
2026-04-06 05:32:01.083229 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:32:01.083239 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:32:01.083248 | orchestrator |
2026-04-06 05:32:01.083258 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-06 05:32:01.083267 | orchestrator | Monday 06 April 2026 05:31:56 +0000 (0:00:00.588) 0:24:26.063 **********
2026-04-06 05:32:01.083277 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:32:01.083287 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:32:01.083296 | orchestrator |
2026-04-06 05:32:01.083306 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-06 05:32:01.083316 | orchestrator | Monday 06 April 2026 05:31:56 +0000 (0:00:00.249) 0:24:26.312 **********
2026-04-06 05:32:01.083325 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:32:01.083335 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:32:01.083345 | orchestrator |
2026-04-06 05:32:01.083354 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-06 05:32:01.083364 | orchestrator | Monday 06 April 2026 05:31:56 +0000 (0:00:00.242) 0:24:26.554 **********
2026-04-06 05:32:01.083374 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:01.083383 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:01.083393 | orchestrator |
2026-04-06 05:32:01.083402 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-06 05:32:01.083412 | orchestrator | Monday 06 April 2026 05:31:57 +0000 (0:00:00.238) 0:24:26.793 **********
2026-04-06 05:32:01.083422 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:01.083432 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:01.083441 | orchestrator |
2026-04-06 05:32:01.083451 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-06 05:32:01.083461 | orchestrator | Monday 06 April 2026 05:31:57 +0000 (0:00:00.220) 0:24:27.014 **********
2026-04-06 05:32:01.083470 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:01.083480 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:01.083489 | orchestrator |
2026-04-06 05:32:01.083499 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-06 05:32:01.083509 | orchestrator | Monday 06 April 2026 05:31:57 +0000 (0:00:00.222) 0:24:27.237 **********
2026-04-06 05:32:01.083518 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:32:01.083528 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:32:01.083538 | orchestrator |
2026-04-06 05:32:01.083547 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-06 05:32:01.083557 | orchestrator | Monday 06 April 2026 05:31:57 +0000 (0:00:00.249) 0:24:27.486 **********
2026-04-06 05:32:01.083567 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:32:01.083576 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:32:01.083586 | orchestrator |
2026-04-06 05:32:01.083595 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-06 05:32:01.083605 | orchestrator | Monday 06 April 2026 05:31:58 +0000 (0:00:00.815) 0:24:28.302 **********
2026-04-06 05:32:01.083615 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:01.083624 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:01.083634 | orchestrator |
2026-04-06 05:32:01.083644 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-06 05:32:01.083653 | orchestrator | Monday 06 April 2026 05:31:58 +0000 (0:00:00.225) 0:24:28.527 **********
2026-04-06 05:32:01.083669 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:01.083679 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:01.083689 | orchestrator |
2026-04-06 05:32:01.083698 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-06 05:32:01.083708 | orchestrator | Monday 06 April 2026 05:31:59 +0000 (0:00:00.227) 0:24:28.754 **********
2026-04-06 05:32:01.083718 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:01.083727 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:01.083737 | orchestrator |
2026-04-06 05:32:01.083747 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-06 05:32:01.083756 | orchestrator | Monday 06 April 2026 05:31:59 +0000 (0:00:00.238) 0:24:28.993 **********
2026-04-06 05:32:01.083766 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:01.083775 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:01.083785 | orchestrator |
2026-04-06 05:32:01.083795 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-06 05:32:01.083805 | orchestrator | Monday 06 April 2026 05:31:59 +0000 (0:00:00.253) 0:24:29.246 **********
2026-04-06 05:32:01.083814 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:01.083824 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:01.083833 | orchestrator |
2026-04-06 05:32:01.083843 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-06 05:32:01.083852 | orchestrator | Monday 06 April 2026 05:31:59 +0000 (0:00:00.214) 0:24:29.461 **********
2026-04-06 05:32:01.083862 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:01.083872 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:01.083881 | orchestrator |
2026-04-06 05:32:01.083891 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-06 05:32:01.083901 | orchestrator | Monday 06 April 2026 05:32:00 +0000 (0:00:00.588) 0:24:30.050 **********
2026-04-06 05:32:01.083910 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:01.083920 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:01.083929 | orchestrator |
2026-04-06 05:32:01.083939 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-06 05:32:01.083949 | orchestrator | Monday 06 April 2026 05:32:00 +0000 (0:00:00.247) 0:24:30.298 **********
2026-04-06 05:32:01.083958 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:01.083968 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:01.083978 | orchestrator |
2026-04-06 05:32:01.083987 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-06 05:32:01.083997 | orchestrator | Monday 06 April 2026 05:32:00 +0000 (0:00:00.239) 0:24:30.537 **********
2026-04-06 05:32:01.084006 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:01.084016 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:01.084026 | orchestrator |
2026-04-06 05:32:01.084055 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-06 05:32:15.715819 | orchestrator | Monday 06 April 2026 05:32:01 +0000 (0:00:00.252) 0:24:30.790 **********
2026-04-06 05:32:15.715935 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:15.715951 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:15.715963 | orchestrator |
2026-04-06 05:32:15.715974 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-06 05:32:15.715984 | orchestrator | Monday 06 April 2026 05:32:01 +0000 (0:00:00.224) 0:24:31.014 **********
2026-04-06 05:32:15.716053 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:15.716064 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:15.716074 | orchestrator |
2026-04-06 05:32:15.716084 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-06 05:32:15.716095 | orchestrator | Monday 06 April 2026 05:32:01 +0000 (0:00:00.212) 0:24:31.227 **********
2026-04-06 05:32:15.716105 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:15.716115 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:15.716125 | orchestrator |
2026-04-06 05:32:15.716135 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-06 05:32:15.716164 | orchestrator | Monday 06 April 2026 05:32:02 +0000 (0:00:00.689) 0:24:31.916 **********
2026-04-06 05:32:15.716174 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:32:15.716185 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:32:15.716195 | orchestrator |
2026-04-06 05:32:15.716204 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-06 05:32:15.716215 | orchestrator | Monday 06 April 2026 05:32:03 +0000 (0:00:01.013) 0:24:32.929 **********
2026-04-06 05:32:15.716225 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:32:15.716234 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:32:15.716244 | orchestrator |
2026-04-06 05:32:15.716254 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-06 05:32:15.716264 | orchestrator | Monday 06 April 2026 05:32:04 +0000 (0:00:01.292) 0:24:34.222 **********
2026-04-06 05:32:15.716274 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4, testbed-node-5
2026-04-06 05:32:15.716284 | orchestrator |
2026-04-06 05:32:15.716294 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-06 05:32:15.716304 | orchestrator | Monday 06 April 2026 05:32:04 +0000 (0:00:00.404) 0:24:34.626 **********
2026-04-06 05:32:15.716314 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:15.716323 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:15.716333 | orchestrator |
2026-04-06 05:32:15.716345 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-06 05:32:15.716356 | orchestrator | Monday 06 April 2026 05:32:05 +0000 (0:00:00.233) 0:24:34.860 **********
2026-04-06 05:32:15.716368 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:15.716379 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:15.716391 | orchestrator |
2026-04-06 05:32:15.716402 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-06 05:32:15.716414 | orchestrator | Monday 06 April 2026 05:32:05 +0000 (0:00:00.589) 0:24:35.450 **********
2026-04-06 05:32:15.716425 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-06 05:32:15.716436 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-06 05:32:15.716447 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-06 05:32:15.716459 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-06 05:32:15.716470 | orchestrator |
2026-04-06 05:32:15.716480 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-06 05:32:15.716491 | orchestrator | Monday 06 April 2026 05:32:06 +0000 (0:00:00.921) 0:24:36.371 **********
2026-04-06 05:32:15.716502 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:32:15.716514 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:32:15.716525 | orchestrator |
2026-04-06 05:32:15.716536 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-06 05:32:15.716547 | orchestrator | Monday 06 April 2026 05:32:07 +0000 (0:00:00.616) 0:24:36.988 **********
2026-04-06 05:32:15.716559 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:15.716570 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:15.716581 | orchestrator |
2026-04-06 05:32:15.716593 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-06 05:32:15.716604 | orchestrator | Monday 06 April 2026 05:32:07 +0000 (0:00:00.258) 0:24:37.246 **********
2026-04-06 05:32:15.716615 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:15.716627 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:15.716638 | orchestrator |
2026-04-06 05:32:15.716649 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-06 05:32:15.716660 | orchestrator | Monday 06 April 2026 05:32:07 +0000 (0:00:00.269) 0:24:37.515 **********
2026-04-06 05:32:15.716671 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:15.716683 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:15.716702 | orchestrator |
2026-04-06 05:32:15.716712 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-06 05:32:15.716721 | orchestrator | Monday 06 April 2026 05:32:08 +0000 (0:00:00.292) 0:24:37.808 **********
2026-04-06 05:32:15.716731 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4, testbed-node-5
2026-04-06 05:32:15.716741 | orchestrator |
2026-04-06 05:32:15.716751 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-06 05:32:15.716760 | orchestrator | Monday 06 April 2026 05:32:08 +0000 (0:00:00.720) 0:24:38.529 **********
2026-04-06 05:32:15.716770 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:32:15.716780 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:32:15.716789 | orchestrator |
2026-04-06 05:32:15.716799 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-06 05:32:15.716809 | orchestrator | Monday 06 April 2026 05:32:09 +0000 (0:00:00.779) 0:24:39.309 **********
2026-04-06 05:32:15.716819 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-06 05:32:15.716850 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-06 05:32:15.716861 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-06 05:32:15.716871 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:15.716880 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-06 05:32:15.716890 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-06 05:32:15.716899 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-06 05:32:15.716909 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:15.716918 | orchestrator |
2026-04-06 05:32:15.716928 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-06 05:32:15.716938 | orchestrator | Monday 06 April 2026 05:32:09 +0000 (0:00:00.268) 0:24:39.577 **********
2026-04-06 05:32:15.716947 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:15.716957 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:15.716966 | orchestrator |
2026-04-06 05:32:15.716976 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-06 05:32:15.716986 | orchestrator | Monday 06 April 2026 05:32:10 +0000 (0:00:00.234) 0:24:39.812 **********
2026-04-06 05:32:15.717013 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:15.717023 | orchestrator |
2026-04-06 05:32:15.717033 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-06 05:32:15.717042 | orchestrator | Monday 06 April 2026 05:32:10 +0000 (0:00:00.188) 0:24:40.001 **********
2026-04-06 05:32:15.717052 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:15.717062 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:15.717071 | orchestrator |
2026-04-06 05:32:15.717081 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-06 05:32:15.717090 | orchestrator | Monday 06 April 2026 05:32:10 +0000 (0:00:00.279) 0:24:40.280 **********
2026-04-06 05:32:15.717100 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:15.717109 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:15.717119 | orchestrator |
2026-04-06 05:32:15.717129 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-06 05:32:15.717138 | orchestrator | Monday 06 April 2026 05:32:10 +0000 (0:00:00.236) 0:24:40.517 **********
2026-04-06 05:32:15.717148 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:15.717158 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:15.717167 | orchestrator |
2026-04-06 05:32:15.717177 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-06 05:32:15.717186 | orchestrator | Monday 06 April 2026 05:32:11 +0000 (0:00:00.588) 0:24:41.106 **********
2026-04-06 05:32:15.717196 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:32:15.717205 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:32:15.717215 | orchestrator |
2026-04-06 05:32:15.717231 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-06 05:32:15.717241 | orchestrator | Monday 06 April 2026 05:32:12 +0000 (0:00:01.496) 0:24:42.602 **********
2026-04-06 05:32:15.717251 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:32:15.717260 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:32:15.717270 | orchestrator |
2026-04-06 05:32:15.717279 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-06 05:32:15.717289 | orchestrator | Monday 06 April 2026 05:32:13 +0000 (0:00:00.253) 0:24:42.855 **********
2026-04-06 05:32:15.717298 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4, testbed-node-5
2026-04-06 05:32:15.717309 | orchestrator |
2026-04-06 05:32:15.717319 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-06 05:32:15.717328 | orchestrator | Monday 06 April 2026 05:32:13 +0000 (0:00:00.398) 0:24:43.254 **********
2026-04-06 05:32:15.717338 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:15.717348 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:15.717357 | orchestrator |
2026-04-06 05:32:15.717367 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-06 05:32:15.717377 | orchestrator | Monday 06 April 2026 05:32:13 +0000 (0:00:00.247) 0:24:43.501 **********
2026-04-06 05:32:15.717386 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:15.717396 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:15.717405 | orchestrator |
2026-04-06 05:32:15.717415 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-06 05:32:15.717425 | orchestrator | Monday 06 April 2026 05:32:14 +0000 (0:00:00.594) 0:24:44.095 **********
2026-04-06 05:32:15.717434 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:15.717444 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:15.717454 | orchestrator |
2026-04-06 05:32:15.717463 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-06 05:32:15.717473 | orchestrator | Monday 06 April 2026 05:32:14 +0000 (0:00:00.273) 0:24:44.369 **********
2026-04-06 05:32:15.717482 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:15.717492 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:15.717501 | orchestrator |
2026-04-06 05:32:15.717511 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-06 05:32:15.717521 | orchestrator | Monday 06 April 2026 05:32:14 +0000 (0:00:00.261) 0:24:44.631 **********
2026-04-06 05:32:15.717530 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:15.717540 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:15.717550 | orchestrator |
2026-04-06 05:32:15.717559 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-06 05:32:15.717569 | orchestrator | Monday 06 April 2026 05:32:15 +0000 (0:00:00.239) 0:24:44.871 **********
2026-04-06 05:32:15.717578 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:15.717588 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:15.717598 | orchestrator |
2026-04-06 05:32:15.717607 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-06 05:32:15.717617 | orchestrator | Monday 06 April 2026 05:32:15 +0000 (0:00:00.260) 0:24:45.131 **********
2026-04-06 05:32:15.717627 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:15.717636 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:15.717646 | orchestrator |
2026-04-06 05:32:15.717662 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-06 05:32:34.256766 | orchestrator | Monday 06 April 2026 05:32:15 +0000 (0:00:00.292) 0:24:45.424 **********
2026-04-06 05:32:34.256873 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:32:34.256892 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:32:34.256906 | orchestrator |
2026-04-06 05:32:34.256919 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-06 05:32:34.256931 | orchestrator | Monday 06 April 2026 05:32:15 +0000 (0:00:00.248) 0:24:45.672 **********
2026-04-06 05:32:34.257002 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:32:34.257034 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:32:34.257041 | orchestrator |
2026-04-06 05:32:34.257049 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-06 05:32:34.257056 | orchestrator | Monday 06 April 2026 05:32:16 +0000 (0:00:00.722) 0:24:46.395 **********
2026-04-06 05:32:34.257064 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4, testbed-node-5
2026-04-06 05:32:34.257072 | orchestrator |
2026-04-06 05:32:34.257079 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-06 05:32:34.257087 | orchestrator | Monday 06 April 2026 05:32:17 +0000 (0:00:00.392) 0:24:46.788 **********
2026-04-06 05:32:34.257094 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph)
2026-04-06 05:32:34.257102 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-04-06 05:32:34.257109 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/)
2026-04-06 05:32:34.257116 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-06 05:32:34.257124 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-04-06 05:32:34.257131 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-06 05:32:34.257138 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-04-06 05:32:34.257146 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-06 05:32:34.257153 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-04-06 05:32:34.257160 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-06 05:32:34.257167 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-04-06 05:32:34.257174 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-06 05:32:34.257182 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-06 05:32:34.257189 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-04-06 05:32:34.257196 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-06 05:32:34.257204 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-04-06 05:32:34.257211 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-06 05:32:34.257219 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-06 05:32:34.257226 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-06 05:32:34.257233 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-06 05:32:34.257241 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-06 05:32:34.257248 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-06 05:32:34.257255 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-06 05:32:34.257262 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-06 05:32:34.257269 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-06 05:32:34.257277 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-06 05:32:34.257284 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-06 05:32:34.257291 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-06 05:32:34.257298 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2026-04-06 05:32:34.257306 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-04-06 05:32:34.257316 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph)
2026-04-06 05:32:34.257326 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-04-06 05:32:34.257337 | orchestrator |
2026-04-06 05:32:34.257347 | orchestrator | TASK
[ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-06 05:32:34.257357 | orchestrator | Monday 06 April 2026 05:32:22 +0000 (0:00:05.745) 0:24:52.534 ********** 2026-04-06 05:32:34.257367 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4, testbed-node-5 2026-04-06 05:32:34.257384 | orchestrator | 2026-04-06 05:32:34.257394 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-06 05:32:34.257404 | orchestrator | Monday 06 April 2026 05:32:23 +0000 (0:00:00.663) 0:24:53.198 ********** 2026-04-06 05:32:34.257415 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-06 05:32:34.257428 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-06 05:32:34.257438 | orchestrator | 2026-04-06 05:32:34.257448 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-06 05:32:34.257458 | orchestrator | Monday 06 April 2026 05:32:24 +0000 (0:00:00.599) 0:24:53.797 ********** 2026-04-06 05:32:34.257469 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-06 05:32:34.257479 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-06 05:32:34.257490 | orchestrator | 2026-04-06 05:32:34.257501 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-06 05:32:34.257536 | orchestrator | Monday 06 April 2026 05:32:25 +0000 (0:00:01.115) 0:24:54.913 ********** 2026-04-06 05:32:34.257545 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:32:34.257554 | orchestrator | 
skipping: [testbed-node-5] 2026-04-06 05:32:34.257563 | orchestrator | 2026-04-06 05:32:34.257571 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-06 05:32:34.257580 | orchestrator | Monday 06 April 2026 05:32:25 +0000 (0:00:00.237) 0:24:55.150 ********** 2026-04-06 05:32:34.257589 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:32:34.257597 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:32:34.257606 | orchestrator | 2026-04-06 05:32:34.257614 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-06 05:32:34.257623 | orchestrator | Monday 06 April 2026 05:32:25 +0000 (0:00:00.250) 0:24:55.401 ********** 2026-04-06 05:32:34.257632 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:32:34.257640 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:32:34.257649 | orchestrator | 2026-04-06 05:32:34.257658 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-06 05:32:34.257666 | orchestrator | Monday 06 April 2026 05:32:25 +0000 (0:00:00.254) 0:24:55.656 ********** 2026-04-06 05:32:34.257675 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:32:34.257684 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:32:34.257692 | orchestrator | 2026-04-06 05:32:34.257701 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-06 05:32:34.257709 | orchestrator | Monday 06 April 2026 05:32:26 +0000 (0:00:00.230) 0:24:55.887 ********** 2026-04-06 05:32:34.257718 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:32:34.257727 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:32:34.257735 | orchestrator | 2026-04-06 05:32:34.257744 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-06 05:32:34.257753 | orchestrator | Monday 06 April 2026 
05:32:26 +0000 (0:00:00.569) 0:24:56.456 ********** 2026-04-06 05:32:34.257762 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:32:34.257770 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:32:34.257779 | orchestrator | 2026-04-06 05:32:34.257788 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-06 05:32:34.257797 | orchestrator | Monday 06 April 2026 05:32:26 +0000 (0:00:00.228) 0:24:56.685 ********** 2026-04-06 05:32:34.257805 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:32:34.257814 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:32:34.257823 | orchestrator | 2026-04-06 05:32:34.257831 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-06 05:32:34.257840 | orchestrator | Monday 06 April 2026 05:32:27 +0000 (0:00:00.238) 0:24:56.923 ********** 2026-04-06 05:32:34.257854 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:32:34.257863 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:32:34.257871 | orchestrator | 2026-04-06 05:32:34.257880 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-06 05:32:34.257888 | orchestrator | Monday 06 April 2026 05:32:27 +0000 (0:00:00.222) 0:24:57.146 ********** 2026-04-06 05:32:34.257897 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:32:34.257906 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:32:34.257914 | orchestrator | 2026-04-06 05:32:34.257923 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-06 05:32:34.257931 | orchestrator | Monday 06 April 2026 05:32:27 +0000 (0:00:00.235) 0:24:57.382 ********** 2026-04-06 05:32:34.257961 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:32:34.257979 | orchestrator | skipping: [testbed-node-5] 2026-04-06 
05:32:34.257994 | orchestrator | 2026-04-06 05:32:34.258008 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-06 05:32:34.258081 | orchestrator | Monday 06 April 2026 05:32:27 +0000 (0:00:00.213) 0:24:57.595 ********** 2026-04-06 05:32:34.258097 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:32:34.258111 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:32:34.258125 | orchestrator | 2026-04-06 05:32:34.258140 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-06 05:32:34.258154 | orchestrator | Monday 06 April 2026 05:32:28 +0000 (0:00:00.239) 0:24:57.834 ********** 2026-04-06 05:32:34.258169 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-04-06 05:32:34.258184 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-04-06 05:32:34.258199 | orchestrator | 2026-04-06 05:32:34.258209 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-06 05:32:34.258218 | orchestrator | Monday 06 April 2026 05:32:31 +0000 (0:00:03.726) 0:25:01.561 ********** 2026-04-06 05:32:34.258226 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-06 05:32:34.258235 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-06 05:32:34.258244 | orchestrator | 2026-04-06 05:32:34.258253 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-06 05:32:34.258261 | orchestrator | Monday 06 April 2026 05:32:32 +0000 (0:00:00.316) 0:25:01.878 ********** 2026-04-06 05:32:34.258272 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-04-06 05:32:34.258300 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-04-06 05:32:58.672247 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-04-06 05:32:58.672343 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-04-06 05:32:58.672375 | orchestrator | 2026-04-06 05:32:58.672386 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-06 05:32:58.672395 | orchestrator | Monday 06 April 2026 05:32:36 +0000 (0:00:04.041) 0:25:05.919 ********** 2026-04-06 05:32:58.672404 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:32:58.672413 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:32:58.672421 | orchestrator | 2026-04-06 05:32:58.672429 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-06 05:32:58.672437 | orchestrator | Monday 06 April 2026 05:32:36 +0000 
(0:00:00.232) 0:25:06.152 ********** 2026-04-06 05:32:58.672445 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:32:58.672453 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:32:58.672461 | orchestrator | 2026-04-06 05:32:58.672469 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-06 05:32:58.672479 | orchestrator | Monday 06 April 2026 05:32:36 +0000 (0:00:00.231) 0:25:06.383 ********** 2026-04-06 05:32:58.672488 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:32:58.672496 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:32:58.672504 | orchestrator | 2026-04-06 05:32:58.672512 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-06 05:32:58.672520 | orchestrator | Monday 06 April 2026 05:32:36 +0000 (0:00:00.282) 0:25:06.666 ********** 2026-04-06 05:32:58.672528 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:32:58.672535 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:32:58.672543 | orchestrator | 2026-04-06 05:32:58.672551 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-06 05:32:58.672559 | orchestrator | Monday 06 April 2026 05:32:37 +0000 (0:00:00.607) 0:25:07.274 ********** 2026-04-06 05:32:58.672567 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:32:58.672575 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:32:58.672583 | orchestrator | 2026-04-06 05:32:58.672591 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-06 05:32:58.672599 | orchestrator | Monday 06 April 2026 05:32:37 +0000 (0:00:00.256) 0:25:07.530 ********** 2026-04-06 05:32:58.672607 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:32:58.672616 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:32:58.672623 | orchestrator | 2026-04-06 
05:32:58.672631 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-06 05:32:58.672639 | orchestrator | Monday 06 April 2026 05:32:38 +0000 (0:00:00.360) 0:25:07.891 ********** 2026-04-06 05:32:58.672647 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-06 05:32:58.672655 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-06 05:32:58.672663 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-06 05:32:58.672671 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:32:58.672678 | orchestrator | 2026-04-06 05:32:58.672686 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-06 05:32:58.672694 | orchestrator | Monday 06 April 2026 05:32:38 +0000 (0:00:00.491) 0:25:08.383 ********** 2026-04-06 05:32:58.672702 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-06 05:32:58.672710 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-06 05:32:58.672718 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-06 05:32:58.672725 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:32:58.672733 | orchestrator | 2026-04-06 05:32:58.672741 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-06 05:32:58.672749 | orchestrator | Monday 06 April 2026 05:32:39 +0000 (0:00:00.452) 0:25:08.836 ********** 2026-04-06 05:32:58.672757 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-06 05:32:58.672765 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-06 05:32:58.672772 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-06 05:32:58.672786 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:32:58.672795 | orchestrator | 2026-04-06 05:32:58.672802 | orchestrator | TASK [ceph-facts : 
Reset rgw_instances (workaround)] *************************** 2026-04-06 05:32:58.672810 | orchestrator | Monday 06 April 2026 05:32:39 +0000 (0:00:00.469) 0:25:09.306 ********** 2026-04-06 05:32:58.672818 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:32:58.672826 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:32:58.672834 | orchestrator | 2026-04-06 05:32:58.672842 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-06 05:32:58.672850 | orchestrator | Monday 06 April 2026 05:32:39 +0000 (0:00:00.246) 0:25:09.552 ********** 2026-04-06 05:32:58.672857 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-06 05:32:58.672866 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-06 05:32:58.672873 | orchestrator | 2026-04-06 05:32:58.672907 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-06 05:32:58.672915 | orchestrator | Monday 06 April 2026 05:32:40 +0000 (0:00:00.913) 0:25:10.466 ********** 2026-04-06 05:32:58.672936 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:32:58.672944 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:32:58.672952 | orchestrator | 2026-04-06 05:32:58.672974 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-04-06 05:32:58.672982 | orchestrator | Monday 06 April 2026 05:32:41 +0000 (0:00:01.005) 0:25:11.471 ********** 2026-04-06 05:32:58.672990 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:32:58.672998 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:32:58.673030 | orchestrator | 2026-04-06 05:32:58.673038 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-04-06 05:32:58.673046 | orchestrator | Monday 06 April 2026 05:32:42 +0000 (0:00:00.257) 0:25:11.729 ********** 2026-04-06 05:32:58.673054 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-4, 
testbed-node-5 2026-04-06 05:32:58.673062 | orchestrator | 2026-04-06 05:32:58.673070 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-04-06 05:32:58.673078 | orchestrator | Monday 06 April 2026 05:32:42 +0000 (0:00:00.344) 0:25:12.074 ********** 2026-04-06 05:32:58.673086 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-06 05:32:58.673093 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-06 05:32:58.673101 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-04-06 05:32:58.673109 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-04-06 05:32:58.673116 | orchestrator | 2026-04-06 05:32:58.673124 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-04-06 05:32:58.673132 | orchestrator | Monday 06 April 2026 05:32:43 +0000 (0:00:00.993) 0:25:13.067 ********** 2026-04-06 05:32:58.673140 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 05:32:58.673148 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-06 05:32:58.673156 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-06 05:32:58.673164 | orchestrator | 2026-04-06 05:32:58.673171 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-04-06 05:32:58.673179 | orchestrator | Monday 06 April 2026 05:32:45 +0000 (0:00:02.601) 0:25:15.668 ********** 2026-04-06 05:32:58.673187 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-06 05:32:58.673195 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-06 05:32:58.673203 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:32:58.673211 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-06 05:32:58.673219 | orchestrator | skipping: [testbed-node-5] => 
(item=None)  2026-04-06 05:32:58.673227 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:32:58.673235 | orchestrator | 2026-04-06 05:32:58.673243 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-04-06 05:32:58.673250 | orchestrator | Monday 06 April 2026 05:32:47 +0000 (0:00:01.469) 0:25:17.138 ********** 2026-04-06 05:32:58.673264 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:32:58.673272 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:32:58.673280 | orchestrator | 2026-04-06 05:32:58.673288 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-04-06 05:32:58.673296 | orchestrator | Monday 06 April 2026 05:32:48 +0000 (0:00:00.623) 0:25:17.762 ********** 2026-04-06 05:32:58.673303 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:32:58.673311 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:32:58.673319 | orchestrator | 2026-04-06 05:32:58.673327 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-04-06 05:32:58.673335 | orchestrator | Monday 06 April 2026 05:32:48 +0000 (0:00:00.221) 0:25:17.984 ********** 2026-04-06 05:32:58.673342 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-4, testbed-node-5 2026-04-06 05:32:58.673351 | orchestrator | 2026-04-06 05:32:58.673363 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-04-06 05:32:58.673376 | orchestrator | Monday 06 April 2026 05:32:48 +0000 (0:00:00.394) 0:25:18.378 ********** 2026-04-06 05:32:58.673390 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-4, testbed-node-5 2026-04-06 05:32:58.673403 | orchestrator | 2026-04-06 05:32:58.673416 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-04-06 05:32:58.673429 | orchestrator | Monday 06 April 2026 
05:32:49 +0000 (0:00:00.627) 0:25:19.006 ********** 2026-04-06 05:32:58.673438 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:32:58.673446 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:32:58.673454 | orchestrator | 2026-04-06 05:32:58.673462 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-04-06 05:32:58.673470 | orchestrator | Monday 06 April 2026 05:32:50 +0000 (0:00:01.124) 0:25:20.130 ********** 2026-04-06 05:32:58.673478 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:32:58.673486 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:32:58.673494 | orchestrator | 2026-04-06 05:32:58.673502 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-04-06 05:32:58.673509 | orchestrator | Monday 06 April 2026 05:32:51 +0000 (0:00:01.048) 0:25:21.179 ********** 2026-04-06 05:32:58.673517 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:32:58.673525 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:32:58.673533 | orchestrator | 2026-04-06 05:32:58.673541 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-04-06 05:32:58.673549 | orchestrator | Monday 06 April 2026 05:32:52 +0000 (0:00:01.396) 0:25:22.575 ********** 2026-04-06 05:32:58.673557 | orchestrator | changed: [testbed-node-4] 2026-04-06 05:32:58.673565 | orchestrator | changed: [testbed-node-5] 2026-04-06 05:32:58.673573 | orchestrator | 2026-04-06 05:32:58.673581 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-04-06 05:32:58.673589 | orchestrator | Monday 06 April 2026 05:32:55 +0000 (0:00:02.480) 0:25:25.056 ********** 2026-04-06 05:32:58.673597 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:32:58.673605 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:32:58.673613 | orchestrator | 2026-04-06 05:32:58.673621 | orchestrator | TASK [Set max_mds] 
************************************************************* 2026-04-06 05:32:58.673634 | orchestrator | Monday 06 April 2026 05:32:56 +0000 (0:00:00.841) 0:25:25.897 ********** 2026-04-06 05:32:58.673642 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:32:58.673655 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:33:06.353493 | orchestrator | 2026-04-06 05:33:06.353631 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-04-06 05:33:06.353661 | orchestrator | 2026-04-06 05:33:06.353682 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-06 05:33:06.353703 | orchestrator | Monday 06 April 2026 05:32:59 +0000 (0:00:03.067) 0:25:28.965 ********** 2026-04-06 05:33:06.353724 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-04-06 05:33:06.353744 | orchestrator | 2026-04-06 05:33:06.353792 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-06 05:33:06.353812 | orchestrator | Monday 06 April 2026 05:32:59 +0000 (0:00:00.315) 0:25:29.280 ********** 2026-04-06 05:33:06.353830 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:06.353850 | orchestrator | 2026-04-06 05:33:06.353908 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-06 05:33:06.353927 | orchestrator | Monday 06 April 2026 05:33:00 +0000 (0:00:00.447) 0:25:29.727 ********** 2026-04-06 05:33:06.353946 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:06.353967 | orchestrator | 2026-04-06 05:33:06.353986 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-06 05:33:06.354009 | orchestrator | Monday 06 April 2026 05:33:00 +0000 (0:00:00.151) 0:25:29.879 ********** 2026-04-06 05:33:06.354121 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:06.354144 | 
orchestrator | 2026-04-06 05:33:06.354165 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-06 05:33:06.354185 | orchestrator | Monday 06 April 2026 05:33:00 +0000 (0:00:00.437) 0:25:30.317 ********** 2026-04-06 05:33:06.354205 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:06.354225 | orchestrator | 2026-04-06 05:33:06.354244 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-06 05:33:06.354265 | orchestrator | Monday 06 April 2026 05:33:00 +0000 (0:00:00.180) 0:25:30.498 ********** 2026-04-06 05:33:06.354286 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:06.354308 | orchestrator | 2026-04-06 05:33:06.354329 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-06 05:33:06.354352 | orchestrator | Monday 06 April 2026 05:33:00 +0000 (0:00:00.142) 0:25:30.640 ********** 2026-04-06 05:33:06.354370 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:06.354388 | orchestrator | 2026-04-06 05:33:06.354406 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-06 05:33:06.354427 | orchestrator | Monday 06 April 2026 05:33:01 +0000 (0:00:00.158) 0:25:30.799 ********** 2026-04-06 05:33:06.354447 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:06.354467 | orchestrator | 2026-04-06 05:33:06.354486 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-06 05:33:06.354504 | orchestrator | Monday 06 April 2026 05:33:01 +0000 (0:00:00.149) 0:25:30.948 ********** 2026-04-06 05:33:06.354522 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:06.354539 | orchestrator | 2026-04-06 05:33:06.354557 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-06 05:33:06.354573 | orchestrator | Monday 06 April 2026 05:33:01 +0000 (0:00:00.156) 
0:25:31.105 ********** 2026-04-06 05:33:06.354591 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:33:06.354608 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:33:06.354625 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:33:06.354642 | orchestrator | 2026-04-06 05:33:06.354659 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-06 05:33:06.354675 | orchestrator | Monday 06 April 2026 05:33:02 +0000 (0:00:01.394) 0:25:32.499 ********** 2026-04-06 05:33:06.354690 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:06.354707 | orchestrator | 2026-04-06 05:33:06.354726 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-06 05:33:06.354743 | orchestrator | Monday 06 April 2026 05:33:03 +0000 (0:00:00.281) 0:25:32.781 ********** 2026-04-06 05:33:06.354761 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:33:06.354779 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:33:06.354797 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:33:06.354812 | orchestrator | 2026-04-06 05:33:06.354829 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-06 05:33:06.354891 | orchestrator | Monday 06 April 2026 05:33:04 +0000 (0:00:01.881) 0:25:34.662 ********** 2026-04-06 05:33:06.354910 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-06 05:33:06.354928 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-06 05:33:06.354945 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-06 
05:33:06.354963 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:33:06.354980 | orchestrator |
2026-04-06 05:33:06.354996 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-06 05:33:06.355012 | orchestrator | Monday 06 April 2026 05:33:05 +0000 (0:00:00.424) 0:25:35.086 **********
2026-04-06 05:33:06.355031 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-06 05:33:06.355071 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-06 05:33:06.355118 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-06 05:33:06.355137 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:33:06.355154 | orchestrator |
2026-04-06 05:33:06.355171 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-06 05:33:06.355189 | orchestrator | Monday 06 April 2026 05:33:05 +0000 (0:00:00.613) 0:25:35.699 **********
2026-04-06 05:33:06.355209 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-06 05:33:06.355232 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-06 05:33:06.355251 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-06 05:33:06.355270 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:33:06.355288 | orchestrator |
2026-04-06 05:33:06.355306 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-06 05:33:06.355324 | orchestrator | Monday 06 April 2026 05:33:06 +0000 (0:00:00.171) 0:25:35.871 **********
2026-04-06 05:33:06.355345 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '06ed7bf51830', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-06 05:33:03.602343', 'end': '2026-04-06 05:33:03.646748', 'delta': '0:00:00.044405', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['06ed7bf51830'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-06 05:33:06.355383 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6879ce368bbc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-06 05:33:04.160902', 'end': '2026-04-06 05:33:04.213115', 'delta': '0:00:00.052213', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6879ce368bbc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-06 05:33:06.355409 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a00606ebddc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-06 05:33:04.753170', 'end': '2026-04-06 05:33:04.797583', 'delta': '0:00:00.044413', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a00606ebddc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-06 05:33:06.355429 | orchestrator |
2026-04-06 05:33:06.355456 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-06 05:33:10.534785 | orchestrator | Monday 06 April 2026 05:33:06 +0000 (0:00:00.194) 0:25:36.065 **********
2026-04-06 05:33:10.534974 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:33:10.534994 | orchestrator |
2026-04-06 05:33:10.535007 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-06 05:33:10.535019 | orchestrator | Monday 06 April 2026 05:33:06 +0000 (0:00:00.278) 0:25:36.343 **********
2026-04-06 05:33:10.535030 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:33:10.535042 | orchestrator |
2026-04-06 05:33:10.535053 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-06 05:33:10.535064 | orchestrator | Monday 06 April 2026 05:33:06 +0000 (0:00:00.255) 0:25:36.598 **********
2026-04-06 05:33:10.535075 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:33:10.535086 | orchestrator |
2026-04-06 05:33:10.535097 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-06 05:33:10.535108 | orchestrator | Monday 06 April 2026 05:33:07 +0000 (0:00:00.152) 0:25:36.751 **********
2026-04-06 05:33:10.535118 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-06 05:33:10.535129 | orchestrator |
2026-04-06 05:33:10.535140 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-06 05:33:10.535151 | orchestrator | Monday 06 April 2026 05:33:07 +0000 (0:00:00.945) 0:25:37.697 **********
2026-04-06 05:33:10.535162 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:33:10.535172 | orchestrator |
2026-04-06 05:33:10.535183 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-06 05:33:10.535194 | orchestrator | Monday 06 April 2026 05:33:08 +0000 (0:00:00.165) 0:25:37.863 **********
2026-04-06 05:33:10.535205 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:33:10.535215 | orchestrator |
2026-04-06 05:33:10.535226 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-06 05:33:10.535237 | orchestrator | Monday 06 April 2026 05:33:08 +0000 (0:00:00.118) 0:25:37.981 **********
2026-04-06 05:33:10.535248 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:33:10.535259 | orchestrator |
2026-04-06 05:33:10.535269 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-06 05:33:10.535302 | orchestrator | Monday 06 April 2026 05:33:09 +0000 (0:00:00.992) 0:25:38.974 **********
2026-04-06 05:33:10.535314 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:33:10.535325 | orchestrator |
2026-04-06 05:33:10.535336 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-06 05:33:10.535347 | orchestrator | Monday 06 April 2026 05:33:09 +0000 (0:00:00.136) 0:25:39.110 **********
2026-04-06 05:33:10.535358 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:33:10.535368 | orchestrator |
2026-04-06 05:33:10.535379 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-06 05:33:10.535390 | orchestrator | Monday 06 April 2026 05:33:09 +0000 (0:00:00.135) 0:25:39.246 **********
2026-04-06 05:33:10.535401 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:33:10.535412 | orchestrator |
2026-04-06 05:33:10.535423 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-06 05:33:10.535434 | orchestrator | Monday 06 April 2026 05:33:09 +0000 (0:00:00.175) 0:25:39.421 **********
2026-04-06 05:33:10.535445 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:33:10.535455 | orchestrator |
2026-04-06 05:33:10.535466 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-06 05:33:10.535477 | orchestrator | Monday 06 April 2026 05:33:09 +0000 (0:00:00.140) 0:25:39.562 **********
2026-04-06 05:33:10.535488 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:33:10.535499 | orchestrator |
2026-04-06 05:33:10.535510 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-06 05:33:10.535521 | orchestrator | Monday 06 April 2026 05:33:10 +0000 (0:00:00.174) 0:25:39.736 **********
2026-04-06 05:33:10.535532 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:33:10.535543 | orchestrator |
2026-04-06 05:33:10.535553 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-06 05:33:10.535565 | orchestrator | Monday 06 April 2026 05:33:10 +0000 (0:00:00.118) 0:25:39.855 **********
2026-04-06 05:33:10.535575 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:33:10.535586 | orchestrator |
2026-04-06 05:33:10.535597 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-06 05:33:10.535608 | orchestrator | Monday 06 April 2026 05:33:10 +0000 (0:00:00.183) 0:25:40.038 **********
2026-04-06 05:33:10.535621 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:33:10.535651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--33ff4195--b9ae--565c--9501--f62265c8cf2c-osd--block--33ff4195--b9ae--565c--9501--f62265c8cf2c', 'dm-uuid-LVM-bPoYmFvg2GavrOdhBiQRDEx8f4M6ftpRd0WF3SgLoZI9250ovpvj600rDtqy23dS'], 'uuids': ['568ee26d-bc52-45e1-a610-bd1b65a33bb1'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '8498d812', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['d0WF3S-gLoZ-I925-0ovp-vj60-0rDt-qy23dS']}})
2026-04-06 05:33:10.535685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103', 'scsi-SQEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '71f71275', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-06 05:33:10.535707 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-KIe40k-k1Qf-BSLn-gKBM-IKSP-hovG-JLrIYd', 'scsi-0QEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527', 'scsi-SQEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5872ea60', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--44d7a625--0d29--5597--9a0c--b91ce06f2e33-osd--block--44d7a625--0d29--5597--9a0c--b91ce06f2e33']}})
2026-04-06 05:33:10.535720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:33:10.535731 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:33:10.535744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-06 05:33:10.535756 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:33:10.535767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-leWHVn-oic4-cgBg-jJKw-f9UM-EMV2-wXFYs3', 'dm-uuid-CRYPT-LUKS2-9b11f78520334917a26820c7a917e496-leWHVn-oic4-cgBg-jJKw-f9UM-EMV2-wXFYs3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-06 05:33:10.535792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:33:10.858009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--44d7a625--0d29--5597--9a0c--b91ce06f2e33-osd--block--44d7a625--0d29--5597--9a0c--b91ce06f2e33', 'dm-uuid-LVM-9nFw926dfpKXupvgijedzJHToRNmcQ5JleWHVnoic4cgBgjJKwf9UMEMV2wXFYs3'], 'uuids': ['9b11f785-2033-4917-a268-20c7a917e496'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5872ea60', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['leWHVn-oic4-cgBg-jJKw-f9UM-EMV2-wXFYs3']}})
2026-04-06 05:33:10.858184 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-oc9r6Q-FBfB-APQ9-Ef3d-Gduy-n2RE-MAdmSJ', 'scsi-0QEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c', 'scsi-SQEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8498d812', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--33ff4195--b9ae--565c--9501--f62265c8cf2c-osd--block--33ff4195--b9ae--565c--9501--f62265c8cf2c']}})
2026-04-06 05:33:10.858206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:33:10.858250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9d494db8', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part16', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part15', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-06 05:33:10.858295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:33:10.858309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:33:10.858321 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-d0WF3S-gLoZ-I925-0ovp-vj60-0rDt-qy23dS', 'dm-uuid-CRYPT-LUKS2-568ee26dbc5245e1a610bd1b65a33bb1-d0WF3S-gLoZ-I925-0ovp-vj60-0rDt-qy23dS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-06 05:33:10.858335 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:33:10.858347 | orchestrator |
2026-04-06 05:33:10.858359 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-06 05:33:10.858371 | orchestrator | Monday 06 April 2026 05:33:10 +0000 (0:00:00.401) 0:25:40.440 **********
2026-04-06 05:33:10.858383 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:33:10.858396 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--33ff4195--b9ae--565c--9501--f62265c8cf2c-osd--block--33ff4195--b9ae--565c--9501--f62265c8cf2c', 'dm-uuid-LVM-bPoYmFvg2GavrOdhBiQRDEx8f4M6ftpRd0WF3SgLoZI9250ovpvj600rDtqy23dS'], 'uuids': ['568ee26d-bc52-45e1-a610-bd1b65a33bb1'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '8498d812', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['d0WF3S-gLoZ-I925-0ovp-vj60-0rDt-qy23dS']}}, 'ansible_loop_var': 'item'})
2026-04-06 05:33:10.858414 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103', 'scsi-SQEMU_QEMU_HARDDISK_71f71275-aa74-4331-91d6-c9a393376103'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '71f71275', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:33:10.858441 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-KIe40k-k1Qf-BSLn-gKBM-IKSP-hovG-JLrIYd', 'scsi-0QEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527', 'scsi-SQEMU_QEMU_HARDDISK_5872ea60-fe11-4979-bb27-b05f1cf0a527'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5872ea60', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--44d7a625--0d29--5597--9a0c--b91ce06f2e33-osd--block--44d7a625--0d29--5597--9a0c--b91ce06f2e33']}}, 'ansible_loop_var': 'item'})
2026-04-06 05:33:10.979008 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:33:10.979123 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:33:10.979150 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:33:10.979173 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:33:10.979204 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-leWHVn-oic4-cgBg-jJKw-f9UM-EMV2-wXFYs3', 'dm-uuid-CRYPT-LUKS2-9b11f78520334917a26820c7a917e496-leWHVn-oic4-cgBg-jJKw-f9UM-EMV2-wXFYs3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:33:10.979238 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:33:10.979268 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--44d7a625--0d29--5597--9a0c--b91ce06f2e33-osd--block--44d7a625--0d29--5597--9a0c--b91ce06f2e33', 'dm-uuid-LVM-9nFw926dfpKXupvgijedzJHToRNmcQ5JleWHVnoic4cgBgjJKwf9UMEMV2wXFYs3'], 'uuids': ['9b11f785-2033-4917-a268-20c7a917e496'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5872ea60', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['leWHVn-oic4-cgBg-jJKw-f9UM-EMV2-wXFYs3']}}, 'ansible_loop_var': 'item'})
2026-04-06 05:33:10.979281 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-oc9r6Q-FBfB-APQ9-Ef3d-Gduy-n2RE-MAdmSJ', 'scsi-0QEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c', 'scsi-SQEMU_QEMU_HARDDISK_8498d812-c1b1-46ed-92c2-ee1d1b35b15c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8498d812', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--33ff4195--b9ae--565c--9501--f62265c8cf2c-osd--block--33ff4195--b9ae--565c--9501--f62265c8cf2c']}}, 'ansible_loop_var': 'item'})
2026-04-06 05:33:10.979296 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:33:10.979325 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9d494db8', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part16', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part15', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_9d494db8-bac9-4b6a-86f1-1860f22fc6aa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:33:20.613355 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:33:20.613476 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:33:20.613491 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-d0WF3S-gLoZ-I925-0ovp-vj60-0rDt-qy23dS', 'dm-uuid-CRYPT-LUKS2-568ee26dbc5245e1a610bd1b65a33bb1-d0WF3S-gLoZ-I925-0ovp-vj60-0rDt-qy23dS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1',
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:33:20.613520 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:20.613532 | orchestrator | 2026-04-06 05:33:20.613542 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-06 05:33:20.613551 | orchestrator | Monday 06 April 2026 05:33:11 +0000 (0:00:00.396) 0:25:40.836 ********** 2026-04-06 05:33:20.613560 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:20.613570 | orchestrator | 2026-04-06 05:33:20.613593 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-06 05:33:20.613616 | orchestrator | Monday 06 April 2026 05:33:11 +0000 (0:00:00.555) 0:25:41.391 ********** 2026-04-06 05:33:20.613625 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:20.613634 | orchestrator | 2026-04-06 05:33:20.613642 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 05:33:20.613651 | orchestrator | Monday 06 April 2026 05:33:11 +0000 (0:00:00.140) 0:25:41.532 ********** 2026-04-06 05:33:20.613660 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:20.613668 | orchestrator | 2026-04-06 05:33:20.613677 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 05:33:20.613686 | orchestrator | Monday 06 April 2026 05:33:12 +0000 (0:00:00.493) 0:25:42.026 ********** 2026-04-06 05:33:20.613694 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:20.613703 | orchestrator | 2026-04-06 05:33:20.613711 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 05:33:20.613720 | orchestrator | Monday 06 April 2026 05:33:12 +0000 (0:00:00.448) 0:25:42.475 ********** 2026-04-06 05:33:20.613729 | orchestrator | skipping: [testbed-node-3] 2026-04-06 
05:33:20.613737 | orchestrator | 2026-04-06 05:33:20.613746 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 05:33:20.613755 | orchestrator | Monday 06 April 2026 05:33:13 +0000 (0:00:00.269) 0:25:42.744 ********** 2026-04-06 05:33:20.613763 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:20.613772 | orchestrator | 2026-04-06 05:33:20.613781 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-06 05:33:20.613790 | orchestrator | Monday 06 April 2026 05:33:13 +0000 (0:00:00.154) 0:25:42.899 ********** 2026-04-06 05:33:20.613799 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-04-06 05:33:20.613808 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-04-06 05:33:20.613816 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-06 05:33:20.613863 | orchestrator | 2026-04-06 05:33:20.613872 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-06 05:33:20.613881 | orchestrator | Monday 06 April 2026 05:33:13 +0000 (0:00:00.684) 0:25:43.584 ********** 2026-04-06 05:33:20.613890 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-06 05:33:20.613898 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-06 05:33:20.613907 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-06 05:33:20.613916 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:20.613924 | orchestrator | 2026-04-06 05:33:20.613933 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-06 05:33:20.613942 | orchestrator | Monday 06 April 2026 05:33:14 +0000 (0:00:00.161) 0:25:43.746 ********** 2026-04-06 05:33:20.613967 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-04-06 05:33:20.613977 | 
orchestrator | 2026-04-06 05:33:20.613987 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-06 05:33:20.613998 | orchestrator | Monday 06 April 2026 05:33:14 +0000 (0:00:00.219) 0:25:43.965 ********** 2026-04-06 05:33:20.614006 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:20.614065 | orchestrator | 2026-04-06 05:33:20.614075 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-06 05:33:20.614084 | orchestrator | Monday 06 April 2026 05:33:14 +0000 (0:00:00.163) 0:25:44.129 ********** 2026-04-06 05:33:20.614101 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:20.614109 | orchestrator | 2026-04-06 05:33:20.614118 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-06 05:33:20.614136 | orchestrator | Monday 06 April 2026 05:33:14 +0000 (0:00:00.177) 0:25:44.307 ********** 2026-04-06 05:33:20.614145 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:20.614153 | orchestrator | 2026-04-06 05:33:20.614162 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-06 05:33:20.614171 | orchestrator | Monday 06 April 2026 05:33:14 +0000 (0:00:00.135) 0:25:44.442 ********** 2026-04-06 05:33:20.614180 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:20.614188 | orchestrator | 2026-04-06 05:33:20.614197 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-06 05:33:20.614206 | orchestrator | Monday 06 April 2026 05:33:15 +0000 (0:00:00.277) 0:25:44.719 ********** 2026-04-06 05:33:20.614214 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 05:33:20.614223 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 05:33:20.614232 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-04-06 05:33:20.614240 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:20.614249 | orchestrator | 2026-04-06 05:33:20.614258 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-06 05:33:20.614266 | orchestrator | Monday 06 April 2026 05:33:15 +0000 (0:00:00.802) 0:25:45.522 ********** 2026-04-06 05:33:20.614275 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 05:33:20.614284 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 05:33:20.614292 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 05:33:20.614301 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:20.614309 | orchestrator | 2026-04-06 05:33:20.614318 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-06 05:33:20.614327 | orchestrator | Monday 06 April 2026 05:33:16 +0000 (0:00:00.742) 0:25:46.264 ********** 2026-04-06 05:33:20.614336 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 05:33:20.614344 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 05:33:20.614353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 05:33:20.614362 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:20.614371 | orchestrator | 2026-04-06 05:33:20.614379 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-06 05:33:20.614388 | orchestrator | Monday 06 April 2026 05:33:17 +0000 (0:00:01.070) 0:25:47.335 ********** 2026-04-06 05:33:20.614402 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:20.614411 | orchestrator | 2026-04-06 05:33:20.614420 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-06 05:33:20.614429 | orchestrator | Monday 06 April 2026 05:33:17 +0000 
(0:00:00.161) 0:25:47.496 ********** 2026-04-06 05:33:20.614437 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-06 05:33:20.614446 | orchestrator | 2026-04-06 05:33:20.614455 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-06 05:33:20.614464 | orchestrator | Monday 06 April 2026 05:33:18 +0000 (0:00:00.362) 0:25:47.859 ********** 2026-04-06 05:33:20.614472 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:33:20.614481 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:33:20.614490 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:33:20.614498 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-06 05:33:20.614507 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-06 05:33:20.614516 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-06 05:33:20.614530 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-06 05:33:20.614539 | orchestrator | 2026-04-06 05:33:20.614547 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-06 05:33:20.614556 | orchestrator | Monday 06 April 2026 05:33:18 +0000 (0:00:00.829) 0:25:48.689 ********** 2026-04-06 05:33:20.614565 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:33:20.614573 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:33:20.614582 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:33:20.614591 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-06 
05:33:20.614599 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-06 05:33:20.614608 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-06 05:33:20.614617 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-06 05:33:20.614625 | orchestrator | 2026-04-06 05:33:20.614640 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-04-06 05:33:35.156417 | orchestrator | Monday 06 April 2026 05:33:20 +0000 (0:00:01.639) 0:25:50.328 ********** 2026-04-06 05:33:35.156520 | orchestrator | changed: [testbed-node-3] 2026-04-06 05:33:35.156533 | orchestrator | 2026-04-06 05:33:35.156544 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-04-06 05:33:35.156553 | orchestrator | Monday 06 April 2026 05:33:21 +0000 (0:00:01.273) 0:25:51.602 ********** 2026-04-06 05:33:35.156563 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-06 05:33:35.156573 | orchestrator | 2026-04-06 05:33:35.156582 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-04-06 05:33:35.156591 | orchestrator | Monday 06 April 2026 05:33:23 +0000 (0:00:01.908) 0:25:53.510 ********** 2026-04-06 05:33:35.156600 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-06 05:33:35.156609 | orchestrator | 2026-04-06 05:33:35.156618 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-06 05:33:35.156627 | orchestrator | Monday 06 April 2026 05:33:24 +0000 (0:00:01.179) 0:25:54.690 ********** 2026-04-06 05:33:35.156636 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-04-06 05:33:35.156645 | orchestrator | 2026-04-06 05:33:35.156653 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-06 05:33:35.156662 | orchestrator | Monday 06 April 2026 05:33:25 +0000 (0:00:00.200) 0:25:54.891 ********** 2026-04-06 05:33:35.156671 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-04-06 05:33:35.156680 | orchestrator | 2026-04-06 05:33:35.156688 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-06 05:33:35.156697 | orchestrator | Monday 06 April 2026 05:33:25 +0000 (0:00:00.205) 0:25:55.097 ********** 2026-04-06 05:33:35.156706 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:35.156715 | orchestrator | 2026-04-06 05:33:35.156723 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-06 05:33:35.156732 | orchestrator | Monday 06 April 2026 05:33:25 +0000 (0:00:00.416) 0:25:55.513 ********** 2026-04-06 05:33:35.156741 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:35.156751 | orchestrator | 2026-04-06 05:33:35.156760 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-04-06 05:33:35.156768 | orchestrator | Monday 06 April 2026 05:33:26 +0000 (0:00:00.537) 0:25:56.051 ********** 2026-04-06 05:33:35.156777 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:35.156826 | orchestrator | 2026-04-06 05:33:35.156836 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-06 05:33:35.156868 | orchestrator | Monday 06 April 2026 05:33:26 +0000 (0:00:00.538) 0:25:56.590 ********** 2026-04-06 05:33:35.156908 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:35.156918 | orchestrator | 2026-04-06 05:33:35.156927 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-06 05:33:35.156936 | orchestrator | Monday 06 April 2026 05:33:27 +0000 (0:00:00.516) 0:25:57.107 ********** 2026-04-06 05:33:35.156945 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:35.156953 | orchestrator | 2026-04-06 05:33:35.156973 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-06 05:33:35.156984 | orchestrator | Monday 06 April 2026 05:33:27 +0000 (0:00:00.133) 0:25:57.241 ********** 2026-04-06 05:33:35.156995 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:35.157005 | orchestrator | 2026-04-06 05:33:35.157015 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-06 05:33:35.157025 | orchestrator | Monday 06 April 2026 05:33:27 +0000 (0:00:00.131) 0:25:57.372 ********** 2026-04-06 05:33:35.157035 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:35.157045 | orchestrator | 2026-04-06 05:33:35.157055 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-06 05:33:35.157065 | orchestrator | Monday 06 April 2026 05:33:27 +0000 (0:00:00.134) 0:25:57.507 ********** 2026-04-06 05:33:35.157075 | 
orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:35.157085 | orchestrator | 2026-04-06 05:33:35.157095 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-06 05:33:35.157104 | orchestrator | Monday 06 April 2026 05:33:28 +0000 (0:00:00.550) 0:25:58.057 ********** 2026-04-06 05:33:35.157113 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:35.157122 | orchestrator | 2026-04-06 05:33:35.157130 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-06 05:33:35.157139 | orchestrator | Monday 06 April 2026 05:33:28 +0000 (0:00:00.543) 0:25:58.601 ********** 2026-04-06 05:33:35.157147 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:35.157156 | orchestrator | 2026-04-06 05:33:35.157165 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-06 05:33:35.157173 | orchestrator | Monday 06 April 2026 05:33:29 +0000 (0:00:00.137) 0:25:58.739 ********** 2026-04-06 05:33:35.157182 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:35.157191 | orchestrator | 2026-04-06 05:33:35.157199 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-06 05:33:35.157208 | orchestrator | Monday 06 April 2026 05:33:29 +0000 (0:00:00.126) 0:25:58.865 ********** 2026-04-06 05:33:35.157216 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:35.157225 | orchestrator | 2026-04-06 05:33:35.157233 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-06 05:33:35.157242 | orchestrator | Monday 06 April 2026 05:33:29 +0000 (0:00:00.206) 0:25:59.071 ********** 2026-04-06 05:33:35.157251 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:35.157259 | orchestrator | 2026-04-06 05:33:35.157268 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-06 05:33:35.157276 
| orchestrator | Monday 06 April 2026 05:33:29 +0000 (0:00:00.164) 0:25:59.236 ********** 2026-04-06 05:33:35.157285 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:35.157293 | orchestrator | 2026-04-06 05:33:35.157318 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-06 05:33:35.157327 | orchestrator | Monday 06 April 2026 05:33:29 +0000 (0:00:00.152) 0:25:59.388 ********** 2026-04-06 05:33:35.157336 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:35.157345 | orchestrator | 2026-04-06 05:33:35.157353 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-06 05:33:35.157362 | orchestrator | Monday 06 April 2026 05:33:30 +0000 (0:00:00.447) 0:25:59.836 ********** 2026-04-06 05:33:35.157370 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:35.157379 | orchestrator | 2026-04-06 05:33:35.157387 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-06 05:33:35.157403 | orchestrator | Monday 06 April 2026 05:33:30 +0000 (0:00:00.151) 0:25:59.987 ********** 2026-04-06 05:33:35.157412 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:35.157420 | orchestrator | 2026-04-06 05:33:35.157429 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-06 05:33:35.157438 | orchestrator | Monday 06 April 2026 05:33:30 +0000 (0:00:00.141) 0:26:00.129 ********** 2026-04-06 05:33:35.157446 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:35.157455 | orchestrator | 2026-04-06 05:33:35.157463 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-06 05:33:35.157472 | orchestrator | Monday 06 April 2026 05:33:30 +0000 (0:00:00.163) 0:26:00.292 ********** 2026-04-06 05:33:35.157481 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:35.157489 | orchestrator | 2026-04-06 05:33:35.157498 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-06 05:33:35.157506 | orchestrator | Monday 06 April 2026 05:33:30 +0000 (0:00:00.232) 0:26:00.525 ********** 2026-04-06 05:33:35.157515 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:35.157524 | orchestrator | 2026-04-06 05:33:35.157532 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-06 05:33:35.157541 | orchestrator | Monday 06 April 2026 05:33:30 +0000 (0:00:00.154) 0:26:00.679 ********** 2026-04-06 05:33:35.157549 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:35.157558 | orchestrator | 2026-04-06 05:33:35.157567 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-06 05:33:35.157575 | orchestrator | Monday 06 April 2026 05:33:31 +0000 (0:00:00.135) 0:26:00.815 ********** 2026-04-06 05:33:35.157584 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:35.157592 | orchestrator | 2026-04-06 05:33:35.157601 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-06 05:33:35.157610 | orchestrator | Monday 06 April 2026 05:33:31 +0000 (0:00:00.124) 0:26:00.939 ********** 2026-04-06 05:33:35.157618 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:35.157627 | orchestrator | 2026-04-06 05:33:35.157635 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-06 05:33:35.157644 | orchestrator | Monday 06 April 2026 05:33:31 +0000 (0:00:00.134) 0:26:01.074 ********** 2026-04-06 05:33:35.157653 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:35.157661 | orchestrator | 2026-04-06 05:33:35.157670 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-06 05:33:35.157679 | orchestrator | Monday 06 April 2026 05:33:31 +0000 (0:00:00.129) 0:26:01.204 ********** 
2026-04-06 05:33:35.157687 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:35.157696 | orchestrator | 2026-04-06 05:33:35.157704 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-06 05:33:35.157718 | orchestrator | Monday 06 April 2026 05:33:31 +0000 (0:00:00.138) 0:26:01.343 ********** 2026-04-06 05:33:35.157727 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:35.157736 | orchestrator | 2026-04-06 05:33:35.157744 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-06 05:33:35.157753 | orchestrator | Monday 06 April 2026 05:33:31 +0000 (0:00:00.135) 0:26:01.478 ********** 2026-04-06 05:33:35.157762 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:35.157771 | orchestrator | 2026-04-06 05:33:35.157779 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-06 05:33:35.157840 | orchestrator | Monday 06 April 2026 05:33:32 +0000 (0:00:00.451) 0:26:01.930 ********** 2026-04-06 05:33:35.157851 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:35.157859 | orchestrator | 2026-04-06 05:33:35.157868 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-06 05:33:35.157876 | orchestrator | Monday 06 April 2026 05:33:32 +0000 (0:00:00.143) 0:26:02.075 ********** 2026-04-06 05:33:35.157885 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:35.157894 | orchestrator | 2026-04-06 05:33:35.157902 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-06 05:33:35.157917 | orchestrator | Monday 06 April 2026 05:33:32 +0000 (0:00:00.124) 0:26:02.199 ********** 2026-04-06 05:33:35.157925 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:35.157934 | orchestrator | 2026-04-06 05:33:35.157942 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-04-06 05:33:35.157951 | orchestrator | Monday 06 April 2026 05:33:32 +0000 (0:00:00.121) 0:26:02.321 ********** 2026-04-06 05:33:35.157960 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:35.157968 | orchestrator | 2026-04-06 05:33:35.157977 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-06 05:33:35.157986 | orchestrator | Monday 06 April 2026 05:33:32 +0000 (0:00:00.205) 0:26:02.527 ********** 2026-04-06 05:33:35.157994 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:35.158003 | orchestrator | 2026-04-06 05:33:35.158011 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-06 05:33:35.158072 | orchestrator | Monday 06 April 2026 05:33:33 +0000 (0:00:00.947) 0:26:03.475 ********** 2026-04-06 05:33:35.158081 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:35.158090 | orchestrator | 2026-04-06 05:33:35.158099 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-06 05:33:35.158107 | orchestrator | Monday 06 April 2026 05:33:34 +0000 (0:00:01.172) 0:26:04.647 ********** 2026-04-06 05:33:35.158116 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-04-06 05:33:35.158125 | orchestrator | 2026-04-06 05:33:35.158134 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-06 05:33:35.158149 | orchestrator | Monday 06 April 2026 05:33:35 +0000 (0:00:00.214) 0:26:04.862 ********** 2026-04-06 05:33:51.143406 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:51.143524 | orchestrator | 2026-04-06 05:33:51.143544 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-06 05:33:51.143558 | orchestrator | Monday 06 April 2026 05:33:35 +0000 (0:00:00.147) 0:26:05.010 ********** 
2026-04-06 05:33:51.143571 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:51.143583 | orchestrator | 2026-04-06 05:33:51.143595 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-06 05:33:51.143607 | orchestrator | Monday 06 April 2026 05:33:35 +0000 (0:00:00.145) 0:26:05.156 ********** 2026-04-06 05:33:51.143618 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-06 05:33:51.143630 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-06 05:33:51.143642 | orchestrator | 2026-04-06 05:33:51.143654 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-06 05:33:51.143666 | orchestrator | Monday 06 April 2026 05:33:36 +0000 (0:00:00.798) 0:26:05.954 ********** 2026-04-06 05:33:51.143677 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:51.143696 | orchestrator | 2026-04-06 05:33:51.143715 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-06 05:33:51.143735 | orchestrator | Monday 06 April 2026 05:33:37 +0000 (0:00:00.809) 0:26:06.764 ********** 2026-04-06 05:33:51.143813 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:51.143834 | orchestrator | 2026-04-06 05:33:51.143853 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-06 05:33:51.143872 | orchestrator | Monday 06 April 2026 05:33:37 +0000 (0:00:00.161) 0:26:06.926 ********** 2026-04-06 05:33:51.143889 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:51.143908 | orchestrator | 2026-04-06 05:33:51.143921 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-06 05:33:51.143933 | orchestrator | Monday 06 April 2026 05:33:37 +0000 (0:00:00.149) 0:26:07.076 ********** 2026-04-06 05:33:51.143944 | orchestrator | 
skipping: [testbed-node-3] 2026-04-06 05:33:51.143955 | orchestrator | 2026-04-06 05:33:51.143966 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-06 05:33:51.144001 | orchestrator | Monday 06 April 2026 05:33:37 +0000 (0:00:00.146) 0:26:07.223 ********** 2026-04-06 05:33:51.144012 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-04-06 05:33:51.144024 | orchestrator | 2026-04-06 05:33:51.144035 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-06 05:33:51.144046 | orchestrator | Monday 06 April 2026 05:33:37 +0000 (0:00:00.229) 0:26:07.452 ********** 2026-04-06 05:33:51.144057 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:51.144068 | orchestrator | 2026-04-06 05:33:51.144079 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-06 05:33:51.144090 | orchestrator | Monday 06 April 2026 05:33:38 +0000 (0:00:00.771) 0:26:08.224 ********** 2026-04-06 05:33:51.144101 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-06 05:33:51.144112 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-06 05:33:51.144138 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-06 05:33:51.144150 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:51.144161 | orchestrator | 2026-04-06 05:33:51.144172 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-06 05:33:51.144183 | orchestrator | Monday 06 April 2026 05:33:38 +0000 (0:00:00.148) 0:26:08.372 ********** 2026-04-06 05:33:51.144193 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:51.144204 | orchestrator | 2026-04-06 05:33:51.144215 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-04-06 05:33:51.144226 | orchestrator | Monday 06 April 2026 05:33:38 +0000 (0:00:00.160) 0:26:08.533 ********** 2026-04-06 05:33:51.144237 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:51.144247 | orchestrator | 2026-04-06 05:33:51.144258 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-06 05:33:51.144269 | orchestrator | Monday 06 April 2026 05:33:39 +0000 (0:00:00.209) 0:26:08.743 ********** 2026-04-06 05:33:51.144280 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:51.144291 | orchestrator | 2026-04-06 05:33:51.144301 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-06 05:33:51.144312 | orchestrator | Monday 06 April 2026 05:33:39 +0000 (0:00:00.155) 0:26:08.898 ********** 2026-04-06 05:33:51.144323 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:51.144334 | orchestrator | 2026-04-06 05:33:51.144345 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-06 05:33:51.144355 | orchestrator | Monday 06 April 2026 05:33:39 +0000 (0:00:00.166) 0:26:09.065 ********** 2026-04-06 05:33:51.144366 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:51.144377 | orchestrator | 2026-04-06 05:33:51.144388 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-06 05:33:51.144399 | orchestrator | Monday 06 April 2026 05:33:39 +0000 (0:00:00.158) 0:26:09.223 ********** 2026-04-06 05:33:51.144410 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:51.144421 | orchestrator | 2026-04-06 05:33:51.144432 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-06 05:33:51.144443 | orchestrator | Monday 06 April 2026 05:33:41 +0000 (0:00:01.956) 0:26:11.180 ********** 2026-04-06 05:33:51.144454 | orchestrator | ok: 
[testbed-node-3] 2026-04-06 05:33:51.144464 | orchestrator | 2026-04-06 05:33:51.144475 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-06 05:33:51.144486 | orchestrator | Monday 06 April 2026 05:33:41 +0000 (0:00:00.142) 0:26:11.323 ********** 2026-04-06 05:33:51.144497 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-04-06 05:33:51.144508 | orchestrator | 2026-04-06 05:33:51.144519 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-06 05:33:51.144549 | orchestrator | Monday 06 April 2026 05:33:41 +0000 (0:00:00.228) 0:26:11.551 ********** 2026-04-06 05:33:51.144560 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:51.144583 | orchestrator | 2026-04-06 05:33:51.144605 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-06 05:33:51.144631 | orchestrator | Monday 06 April 2026 05:33:41 +0000 (0:00:00.161) 0:26:11.713 ********** 2026-04-06 05:33:51.144651 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:51.144668 | orchestrator | 2026-04-06 05:33:51.144685 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-06 05:33:51.144702 | orchestrator | Monday 06 April 2026 05:33:42 +0000 (0:00:00.167) 0:26:11.881 ********** 2026-04-06 05:33:51.144722 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:51.144741 | orchestrator | 2026-04-06 05:33:51.144805 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-06 05:33:51.144817 | orchestrator | Monday 06 April 2026 05:33:42 +0000 (0:00:00.152) 0:26:12.033 ********** 2026-04-06 05:33:51.144828 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:51.144839 | orchestrator | 2026-04-06 05:33:51.144850 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-04-06 05:33:51.144861 | orchestrator | Monday 06 April 2026 05:33:42 +0000 (0:00:00.143) 0:26:12.177 ********** 2026-04-06 05:33:51.144871 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:51.144882 | orchestrator | 2026-04-06 05:33:51.144893 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-06 05:33:51.144904 | orchestrator | Monday 06 April 2026 05:33:42 +0000 (0:00:00.158) 0:26:12.336 ********** 2026-04-06 05:33:51.144915 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:51.144925 | orchestrator | 2026-04-06 05:33:51.144936 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-06 05:33:51.144947 | orchestrator | Monday 06 April 2026 05:33:42 +0000 (0:00:00.148) 0:26:12.484 ********** 2026-04-06 05:33:51.144958 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:51.144968 | orchestrator | 2026-04-06 05:33:51.144979 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-06 05:33:51.144990 | orchestrator | Monday 06 April 2026 05:33:42 +0000 (0:00:00.148) 0:26:12.633 ********** 2026-04-06 05:33:51.145001 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:33:51.145011 | orchestrator | 2026-04-06 05:33:51.145022 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-06 05:33:51.145033 | orchestrator | Monday 06 April 2026 05:33:43 +0000 (0:00:00.160) 0:26:12.794 ********** 2026-04-06 05:33:51.145044 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:33:51.145054 | orchestrator | 2026-04-06 05:33:51.145065 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-06 05:33:51.145076 | orchestrator | Monday 06 April 2026 05:33:43 +0000 (0:00:00.222) 0:26:13.017 ********** 2026-04-06 05:33:51.145087 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-04-06 05:33:51.145098 | orchestrator | 2026-04-06 05:33:51.145109 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-06 05:33:51.145119 | orchestrator | Monday 06 April 2026 05:33:43 +0000 (0:00:00.500) 0:26:13.517 ********** 2026-04-06 05:33:51.145130 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-04-06 05:33:51.145141 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-04-06 05:33:51.145160 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-04-06 05:33:51.145171 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-04-06 05:33:51.145181 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-04-06 05:33:51.145192 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-04-06 05:33:51.145203 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-04-06 05:33:51.145213 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-04-06 05:33:51.145224 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-06 05:33:51.145235 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-06 05:33:51.145255 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-06 05:33:51.145266 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-06 05:33:51.145277 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-06 05:33:51.145288 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-06 05:33:51.145298 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-04-06 05:33:51.145309 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-04-06 05:33:51.145320 | orchestrator | 2026-04-06 05:33:51.145331 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-06 05:33:51.145342 | orchestrator | Monday 06 April 2026 05:33:49 +0000 (0:00:05.624) 0:26:19.142 ********** 2026-04-06 05:33:51.145352 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-04-06 05:33:51.145363 | orchestrator | 2026-04-06 05:33:51.145374 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-06 05:33:51.145385 | orchestrator | Monday 06 April 2026 05:33:49 +0000 (0:00:00.210) 0:26:19.353 ********** 2026-04-06 05:33:51.145396 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-06 05:33:51.145408 | orchestrator | 2026-04-06 05:33:51.145419 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-06 05:33:51.145430 | orchestrator | Monday 06 April 2026 05:33:50 +0000 (0:00:00.503) 0:26:19.856 ********** 2026-04-06 05:33:51.145441 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-06 05:33:51.145452 | orchestrator | 2026-04-06 05:33:51.145463 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-06 05:33:51.145483 | orchestrator | Monday 06 April 2026 05:33:51 +0000 (0:00:00.994) 0:26:20.851 ********** 2026-04-06 05:34:10.142366 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:34:10.142516 | orchestrator | 2026-04-06 05:34:10.142544 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-06 05:34:10.142566 | orchestrator | Monday 06 April 2026 05:33:51 +0000 (0:00:00.147) 0:26:20.998 ********** 2026-04-06 05:34:10.142587 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:34:10.142606 | 
orchestrator | 2026-04-06 05:34:10.142627 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-06 05:34:10.142647 | orchestrator | Monday 06 April 2026 05:33:51 +0000 (0:00:00.138) 0:26:21.137 ********** 2026-04-06 05:34:10.142667 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:34:10.142686 | orchestrator | 2026-04-06 05:34:10.142736 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-06 05:34:10.142757 | orchestrator | Monday 06 April 2026 05:33:51 +0000 (0:00:00.132) 0:26:21.269 ********** 2026-04-06 05:34:10.142777 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:34:10.142796 | orchestrator | 2026-04-06 05:34:10.142813 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-06 05:34:10.142831 | orchestrator | Monday 06 April 2026 05:33:51 +0000 (0:00:00.144) 0:26:21.413 ********** 2026-04-06 05:34:10.142849 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:34:10.142866 | orchestrator | 2026-04-06 05:34:10.142884 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-06 05:34:10.142905 | orchestrator | Monday 06 April 2026 05:33:51 +0000 (0:00:00.139) 0:26:21.553 ********** 2026-04-06 05:34:10.142930 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:34:10.142954 | orchestrator | 2026-04-06 05:34:10.142977 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-06 05:34:10.143002 | orchestrator | Monday 06 April 2026 05:33:51 +0000 (0:00:00.136) 0:26:21.689 ********** 2026-04-06 05:34:10.143027 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:34:10.143054 | orchestrator | 2026-04-06 05:34:10.143077 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-04-06 05:34:10.143134 | orchestrator | Monday 06 April 2026 05:33:52 +0000 (0:00:00.427) 0:26:22.117 ********** 2026-04-06 05:34:10.143154 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:34:10.143174 | orchestrator | 2026-04-06 05:34:10.143196 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-06 05:34:10.143217 | orchestrator | Monday 06 April 2026 05:33:52 +0000 (0:00:00.151) 0:26:22.268 ********** 2026-04-06 05:34:10.143238 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:34:10.143259 | orchestrator | 2026-04-06 05:34:10.143279 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-06 05:34:10.143299 | orchestrator | Monday 06 April 2026 05:33:52 +0000 (0:00:00.152) 0:26:22.420 ********** 2026-04-06 05:34:10.143319 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:34:10.143339 | orchestrator | 2026-04-06 05:34:10.143359 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-06 05:34:10.143377 | orchestrator | Monday 06 April 2026 05:33:52 +0000 (0:00:00.140) 0:26:22.561 ********** 2026-04-06 05:34:10.143397 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:34:10.143417 | orchestrator | 2026-04-06 05:34:10.143455 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-06 05:34:10.143475 | orchestrator | Monday 06 April 2026 05:33:52 +0000 (0:00:00.153) 0:26:22.715 ********** 2026-04-06 05:34:10.143495 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-04-06 05:34:10.143514 | orchestrator | 2026-04-06 05:34:10.143534 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-06 05:34:10.143554 | orchestrator | Monday 06 April 2026 05:33:56 +0000 (0:00:03.489) 0:26:26.204 ********** 2026-04-06 05:34:10.143575 | orchestrator | 
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-06 05:34:10.143596 | orchestrator | 2026-04-06 05:34:10.143617 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-06 05:34:10.143637 | orchestrator | Monday 06 April 2026 05:33:56 +0000 (0:00:00.192) 0:26:26.397 ********** 2026-04-06 05:34:10.143660 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-04-06 05:34:10.143685 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-04-06 05:34:10.143732 | orchestrator | 2026-04-06 05:34:10.143755 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-06 05:34:10.143775 | orchestrator | Monday 06 April 2026 05:34:00 +0000 (0:00:03.886) 0:26:30.284 ********** 2026-04-06 05:34:10.143795 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:34:10.143815 | orchestrator | 2026-04-06 05:34:10.143835 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-06 05:34:10.143855 | orchestrator | Monday 06 April 2026 05:34:00 +0000 (0:00:00.144) 0:26:30.429 ********** 2026-04-06 05:34:10.143875 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:34:10.143895 | orchestrator | 2026-04-06 05:34:10.143915 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-06 05:34:10.143961 | orchestrator | Monday 06 April 2026 05:34:00 +0000 (0:00:00.133) 0:26:30.562 ********** 2026-04-06 05:34:10.143982 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:34:10.144002 | orchestrator | 2026-04-06 05:34:10.144023 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-06 05:34:10.144059 | orchestrator | Monday 06 April 2026 05:34:01 +0000 (0:00:00.169) 0:26:30.732 ********** 2026-04-06 05:34:10.144079 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:34:10.144099 | orchestrator | 2026-04-06 05:34:10.144118 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-06 05:34:10.144138 | orchestrator | Monday 06 April 2026 05:34:01 +0000 (0:00:00.153) 0:26:30.886 ********** 2026-04-06 05:34:10.144158 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:34:10.144177 | orchestrator | 2026-04-06 05:34:10.144197 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-06 05:34:10.144218 | orchestrator | Monday 06 April 2026 05:34:01 +0000 (0:00:00.161) 0:26:31.047 ********** 2026-04-06 05:34:10.144238 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:34:10.144260 | orchestrator | 2026-04-06 05:34:10.144280 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-06 05:34:10.144300 | orchestrator | Monday 06 April 2026 05:34:01 +0000 (0:00:00.597) 0:26:31.645 ********** 2026-04-06 05:34:10.144320 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 05:34:10.144340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 05:34:10.144359 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 05:34:10.144380 | orchestrator | skipping: 
[testbed-node-3] 2026-04-06 05:34:10.144399 | orchestrator | 2026-04-06 05:34:10.144419 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-06 05:34:10.144440 | orchestrator | Monday 06 April 2026 05:34:02 +0000 (0:00:00.424) 0:26:32.069 ********** 2026-04-06 05:34:10.144460 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 05:34:10.144480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 05:34:10.144500 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 05:34:10.144520 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:34:10.144539 | orchestrator | 2026-04-06 05:34:10.144557 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-06 05:34:10.144574 | orchestrator | Monday 06 April 2026 05:34:02 +0000 (0:00:00.467) 0:26:32.536 ********** 2026-04-06 05:34:10.144592 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-06 05:34:10.144609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-06 05:34:10.144626 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-06 05:34:10.144643 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:34:10.144661 | orchestrator | 2026-04-06 05:34:10.144678 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-06 05:34:10.144695 | orchestrator | Monday 06 April 2026 05:34:03 +0000 (0:00:00.483) 0:26:33.019 ********** 2026-04-06 05:34:10.144739 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:34:10.144758 | orchestrator | 2026-04-06 05:34:10.144776 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-06 05:34:10.144805 | orchestrator | Monday 06 April 2026 05:34:03 +0000 (0:00:00.176) 0:26:33.196 ********** 2026-04-06 05:34:10.144823 | orchestrator | ok: 
[testbed-node-3] => (item=0) 2026-04-06 05:34:10.144841 | orchestrator | 2026-04-06 05:34:10.144858 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-06 05:34:10.144878 | orchestrator | Monday 06 April 2026 05:34:03 +0000 (0:00:00.436) 0:26:33.632 ********** 2026-04-06 05:34:10.144895 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:34:10.144913 | orchestrator | 2026-04-06 05:34:10.144930 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-06 05:34:10.144948 | orchestrator | Monday 06 April 2026 05:34:04 +0000 (0:00:00.826) 0:26:34.459 ********** 2026-04-06 05:34:10.144965 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3 2026-04-06 05:34:10.144983 | orchestrator | 2026-04-06 05:34:10.145002 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-06 05:34:10.145036 | orchestrator | Monday 06 April 2026 05:34:05 +0000 (0:00:00.588) 0:26:35.047 ********** 2026-04-06 05:34:10.145054 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 05:34:10.145072 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-06 05:34:10.145090 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-06 05:34:10.145107 | orchestrator | 2026-04-06 05:34:10.145124 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-06 05:34:10.145142 | orchestrator | Monday 06 April 2026 05:34:07 +0000 (0:00:02.232) 0:26:37.279 ********** 2026-04-06 05:34:10.145159 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-06 05:34:10.145175 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-06 05:34:10.145193 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:34:10.145211 | orchestrator | 2026-04-06 05:34:10.145230 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-04-06 05:34:10.145248 | orchestrator | Monday 06 April 2026 05:34:08 +0000 (0:00:00.945) 0:26:38.225 ********** 2026-04-06 05:34:10.145267 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:34:10.145287 | orchestrator | 2026-04-06 05:34:10.145363 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-06 05:34:10.145383 | orchestrator | Monday 06 April 2026 05:34:08 +0000 (0:00:00.489) 0:26:38.715 ********** 2026-04-06 05:34:10.145401 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3 2026-04-06 05:34:10.145419 | orchestrator | 2026-04-06 05:34:10.145436 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-06 05:34:10.145454 | orchestrator | Monday 06 April 2026 05:34:09 +0000 (0:00:00.558) 0:26:39.274 ********** 2026-04-06 05:34:10.145490 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-06 05:35:02.559065 | orchestrator | 2026-04-06 05:35:02.559187 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-06 05:35:02.559205 | orchestrator | Monday 06 April 2026 05:34:10 +0000 (0:00:00.671) 0:26:39.945 ********** 2026-04-06 05:35:02.559217 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 05:35:02.559229 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-06 05:35:02.559241 | orchestrator | 2026-04-06 05:35:02.559253 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-06 05:35:02.559264 | orchestrator | Monday 06 April 2026 05:34:14 +0000 (0:00:04.220) 0:26:44.166 ********** 
2026-04-06 05:35:02.559275 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 05:35:02.559287 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-06 05:35:02.559298 | orchestrator | 2026-04-06 05:35:02.559309 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-06 05:35:02.559320 | orchestrator | Monday 06 April 2026 05:34:16 +0000 (0:00:02.083) 0:26:46.250 ********** 2026-04-06 05:35:02.559331 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-06 05:35:02.559343 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:35:02.559355 | orchestrator | 2026-04-06 05:35:02.559366 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-06 05:35:02.559377 | orchestrator | Monday 06 April 2026 05:34:17 +0000 (0:00:01.016) 0:26:47.266 ********** 2026-04-06 05:35:02.559388 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-06 05:35:02.559399 | orchestrator | 2026-04-06 05:35:02.559410 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-06 05:35:02.559421 | orchestrator | Monday 06 April 2026 05:34:18 +0000 (0:00:00.633) 0:26:47.900 ********** 2026-04-06 05:35:02.559432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-06 05:35:02.559467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-06 05:35:02.559480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-06 05:35:02.559491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-04-06 05:35:02.559502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-06 05:35:02.559513 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:35:02.559524 | orchestrator | 2026-04-06 05:35:02.559550 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-06 05:35:02.559561 | orchestrator | Monday 06 April 2026 05:34:19 +0000 (0:00:00.972) 0:26:48.872 ********** 2026-04-06 05:35:02.559572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-06 05:35:02.559583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-06 05:35:02.559594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-06 05:35:02.559638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-06 05:35:02.559653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-06 05:35:02.559666 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:35:02.559678 | orchestrator | 2026-04-06 05:35:02.559692 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-06 05:35:02.559705 | orchestrator | Monday 06 April 2026 05:34:20 +0000 (0:00:00.993) 0:26:49.866 ********** 2026-04-06 05:35:02.559718 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-06 05:35:02.559732 
| orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-06 05:35:02.559745 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-06 05:35:02.559758 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-06 05:35:02.559773 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-06 05:35:02.559785 | orchestrator | 2026-04-06 05:35:02.559798 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-06 05:35:02.559827 | orchestrator | Monday 06 April 2026 05:34:51 +0000 (0:00:31.462) 0:27:21.329 ********** 2026-04-06 05:35:02.559839 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:35:02.559850 | orchestrator | 2026-04-06 05:35:02.559861 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-06 05:35:02.559872 | orchestrator | Monday 06 April 2026 05:34:51 +0000 (0:00:00.118) 0:27:21.448 ********** 2026-04-06 05:35:02.559883 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:35:02.559894 | orchestrator | 2026-04-06 05:35:02.559905 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-06 05:35:02.559916 | orchestrator | Monday 06 April 2026 05:34:52 +0000 (0:00:00.439) 0:27:21.887 ********** 2026-04-06 05:35:02.559927 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3 2026-04-06 05:35:02.559946 | orchestrator | 2026-04-06 05:35:02.559957 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-04-06 05:35:02.559968 | orchestrator | Monday 06 April 2026 05:34:52 +0000 (0:00:00.600) 0:27:22.487 ********** 2026-04-06 05:35:02.559978 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3 2026-04-06 05:35:02.559990 | orchestrator | 2026-04-06 05:35:02.560001 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-06 05:35:02.560012 | orchestrator | Monday 06 April 2026 05:34:53 +0000 (0:00:00.583) 0:27:23.071 ********** 2026-04-06 05:35:02.560023 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:35:02.560034 | orchestrator | 2026-04-06 05:35:02.560045 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-06 05:35:02.560055 | orchestrator | Monday 06 April 2026 05:34:54 +0000 (0:00:01.104) 0:27:24.175 ********** 2026-04-06 05:35:02.560066 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:35:02.560077 | orchestrator | 2026-04-06 05:35:02.560095 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-06 05:35:02.560115 | orchestrator | Monday 06 April 2026 05:34:55 +0000 (0:00:00.913) 0:27:25.089 ********** 2026-04-06 05:35:02.560135 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:35:02.560154 | orchestrator | 2026-04-06 05:35:02.560169 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-06 05:35:02.560180 | orchestrator | Monday 06 April 2026 05:34:56 +0000 (0:00:01.216) 0:27:26.305 ********** 2026-04-06 05:35:02.560191 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-06 05:35:02.560202 | orchestrator | 2026-04-06 05:35:02.560213 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-04-06 05:35:02.560224 | 
orchestrator |
2026-04-06 05:35:02.560234 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-06 05:35:02.560245 | orchestrator | Monday 06 April 2026 05:34:59 +0000 (0:00:02.445) 0:27:28.751 **********
2026-04-06 05:35:02.560256 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4
2026-04-06 05:35:02.560267 | orchestrator |
2026-04-06 05:35:02.560278 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-06 05:35:02.560294 | orchestrator | Monday 06 April 2026 05:34:59 +0000 (0:00:00.246) 0:27:28.997 **********
2026-04-06 05:35:02.560306 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:35:02.560317 | orchestrator |
2026-04-06 05:35:02.560328 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-06 05:35:02.560339 | orchestrator | Monday 06 April 2026 05:35:00 +0000 (0:00:00.765) 0:27:29.763 **********
2026-04-06 05:35:02.560349 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:35:02.560360 | orchestrator |
2026-04-06 05:35:02.560371 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-06 05:35:02.560382 | orchestrator | Monday 06 April 2026 05:35:00 +0000 (0:00:00.153) 0:27:29.916 **********
2026-04-06 05:35:02.560393 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:35:02.560404 | orchestrator |
2026-04-06 05:35:02.560415 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-06 05:35:02.560426 | orchestrator | Monday 06 April 2026 05:35:00 +0000 (0:00:00.496) 0:27:30.412 **********
2026-04-06 05:35:02.560437 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:35:02.560448 | orchestrator |
2026-04-06 05:35:02.560459 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-06 05:35:02.560469 | orchestrator | Monday 06 April 2026 05:35:00 +0000 (0:00:00.154) 0:27:30.567 **********
2026-04-06 05:35:02.560480 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:35:02.560491 | orchestrator |
2026-04-06 05:35:02.560502 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-06 05:35:02.560513 | orchestrator | Monday 06 April 2026 05:35:01 +0000 (0:00:00.162) 0:27:30.730 **********
2026-04-06 05:35:02.560531 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:35:02.560542 | orchestrator |
2026-04-06 05:35:02.560553 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-06 05:35:02.560564 | orchestrator | Monday 06 April 2026 05:35:01 +0000 (0:00:00.165) 0:27:30.896 **********
2026-04-06 05:35:02.560575 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:02.560586 | orchestrator |
2026-04-06 05:35:02.560597 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-06 05:35:02.560653 | orchestrator | Monday 06 April 2026 05:35:01 +0000 (0:00:00.153) 0:27:31.049 **********
2026-04-06 05:35:02.560665 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:35:02.560676 | orchestrator |
2026-04-06 05:35:02.560687 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-06 05:35:02.560698 | orchestrator | Monday 06 April 2026 05:35:01 +0000 (0:00:00.146) 0:27:31.196 **********
2026-04-06 05:35:02.560709 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-06 05:35:02.560719 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 05:35:02.560730 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 05:35:02.560741 | orchestrator |
2026-04-06 05:35:02.560752 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-06 05:35:02.560770 | orchestrator | Monday 06 April 2026 05:35:02 +0000 (0:00:01.071) 0:27:32.267 **********
2026-04-06 05:35:10.899487 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:35:10.899731 | orchestrator |
2026-04-06 05:35:10.899753 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-06 05:35:10.899768 | orchestrator | Monday 06 April 2026 05:35:02 +0000 (0:00:00.253) 0:27:32.521 **********
2026-04-06 05:35:10.899779 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-06 05:35:10.899792 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-06 05:35:10.899803 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-06 05:35:10.899814 | orchestrator |
2026-04-06 05:35:10.899826 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-06 05:35:10.899837 | orchestrator | Monday 06 April 2026 05:35:04 +0000 (0:00:02.156) 0:27:34.677 **********
2026-04-06 05:35:10.899849 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-06 05:35:10.899860 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-06 05:35:10.899871 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-06 05:35:10.899883 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:10.899894 | orchestrator |
2026-04-06 05:35:10.899905 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-06 05:35:10.899916 | orchestrator | Monday 06 April 2026 05:35:05 +0000 (0:00:00.919) 0:27:35.596 **********
2026-04-06 05:35:10.899929 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-06 05:35:10.899944 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-06 05:35:10.899956 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-06 05:35:10.899967 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:10.899979 | orchestrator |
2026-04-06 05:35:10.899990 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-06 05:35:10.900032 | orchestrator | Monday 06 April 2026 05:35:06 +0000 (0:00:01.009) 0:27:36.605 **********
2026-04-06 05:35:10.900065 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-06 05:35:10.900082 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-06 05:35:10.900096 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-06 05:35:10.900110 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:10.900122 | orchestrator |
2026-04-06 05:35:10.900136 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-06 05:35:10.900150 | orchestrator | Monday 06 April 2026 05:35:07 +0000 (0:00:00.530) 0:27:37.136 **********
2026-04-06 05:35:10.900186 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '06ed7bf51830', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-06 05:35:03.314450', 'end': '2026-04-06 05:35:03.360864', 'delta': '0:00:00.046414', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['06ed7bf51830'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-06 05:35:10.900203 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '6879ce368bbc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-06 05:35:04.227051', 'end': '2026-04-06 05:35:04.276391', 'delta': '0:00:00.049340', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6879ce368bbc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-06 05:35:10.900215 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'a00606ebddc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-06 05:35:04.775126', 'end': '2026-04-06 05:35:04.819819', 'delta': '0:00:00.044693', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a00606ebddc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-06 05:35:10.900236 | orchestrator |
2026-04-06 05:35:10.900247 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-06 05:35:10.900258 | orchestrator | Monday 06 April 2026 05:35:07 +0000 (0:00:00.196) 0:27:37.332 **********
2026-04-06 05:35:10.900269 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:35:10.900280 | orchestrator |
2026-04-06 05:35:10.900291 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-06 05:35:10.900302 | orchestrator | Monday 06 April 2026 05:35:07 +0000 (0:00:00.266) 0:27:37.598 **********
2026-04-06 05:35:10.900313 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:10.900323 | orchestrator |
2026-04-06 05:35:10.900334 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-06 05:35:10.900350 | orchestrator | Monday 06 April 2026 05:35:08 +0000 (0:00:00.272) 0:27:37.871 **********
2026-04-06 05:35:10.900361 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:35:10.900372 | orchestrator |
2026-04-06 05:35:10.900383 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-06 05:35:10.900393 | orchestrator | Monday 06 April 2026 05:35:08 +0000 (0:00:00.142) 0:27:38.014 **********
2026-04-06 05:35:10.900404 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-04-06 05:35:10.900415 | orchestrator |
2026-04-06 05:35:10.900425 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-06 05:35:10.900436 | orchestrator | Monday 06 April 2026 05:35:09 +0000 (0:00:01.004) 0:27:39.018 **********
2026-04-06 05:35:10.900447 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:35:10.900458 | orchestrator |
2026-04-06 05:35:10.900468 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-06 05:35:10.900479 | orchestrator | Monday 06 April 2026 05:35:09 +0000 (0:00:00.158) 0:27:39.177 **********
2026-04-06 05:35:10.900489 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:10.900500 | orchestrator |
2026-04-06 05:35:10.900511 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-06 05:35:10.900521 | orchestrator | Monday 06 April 2026 05:35:09 +0000 (0:00:00.129) 0:27:39.307 **********
2026-04-06 05:35:10.900532 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:10.900543 | orchestrator |
2026-04-06 05:35:10.900553 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-06 05:35:10.900564 | orchestrator | Monday 06 April 2026 05:35:09 +0000 (0:00:00.230) 0:27:39.537 **********
2026-04-06 05:35:10.900575 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:10.900616 | orchestrator |
2026-04-06 05:35:10.900635 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-06 05:35:10.900654 | orchestrator | Monday 06 April 2026 05:35:09 +0000 (0:00:00.136) 0:27:39.674 **********
2026-04-06 05:35:10.900670 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:10.900686 | orchestrator |
2026-04-06 05:35:10.900703 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-06 05:35:10.900717 | orchestrator | Monday 06 April 2026 05:35:10 +0000 (0:00:00.134) 0:27:39.809 **********
2026-04-06 05:35:10.900729 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:35:10.900739 | orchestrator |
2026-04-06 05:35:10.900750 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-06 05:35:10.900761 | orchestrator | Monday 06 April 2026 05:35:10 +0000 (0:00:00.176) 0:27:39.985 **********
2026-04-06 05:35:10.900772 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:10.900783 | orchestrator |
2026-04-06 05:35:10.900794 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-06 05:35:10.900804 | orchestrator | Monday 06 April 2026 05:35:10 +0000 (0:00:00.135) 0:27:40.120 **********
2026-04-06 05:35:10.900815 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:35:10.900826 | orchestrator |
2026-04-06 05:35:10.900837 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-06 05:35:10.900856 | orchestrator | Monday 06 April 2026 05:35:10 +0000 (0:00:00.490) 0:27:40.611 **********
2026-04-06 05:35:11.468415 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:11.468543 | orchestrator |
2026-04-06 05:35:11.468559 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-06 05:35:11.468573 | orchestrator | Monday 06 April 2026 05:35:11 +0000 (0:00:00.147) 0:27:40.758 **********
2026-04-06 05:35:11.468663 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:35:11.468687 | orchestrator |
2026-04-06 05:35:11.468705 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-06 05:35:11.468724 | orchestrator | Monday 06 April 2026 05:35:11 +0000 (0:00:00.188) 0:27:40.947 **********
2026-04-06 05:35:11.468743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:35:11.468767 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8c307d7c--3927--5061--a8a8--155bb148bb1a-osd--block--8c307d7c--3927--5061--a8a8--155bb148bb1a', 'dm-uuid-LVM-5SBcK6LYcqc3U9JW4A7AEqQb9XhQaJZNALmkUrHWUZpUhCY8hyCk4SVv02FoAkUp'], 'uuids': ['83378823-14d2-4928-9007-67488abc99a7'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '48ce9836', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ALmkUr-HWUZ-pUhC-Y8hy-Ck4S-Vv02-FoAkUp']}})
2026-04-06 05:35:11.468815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de', 'scsi-SQEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4a868051', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-06 05:35:11.468855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-9JZghf-Tj4T-hJH3-TdHl-k5PF-Zmcx-ynVATr', 'scsi-0QEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554', 'scsi-SQEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f369a6c0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3-osd--block--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3']}})
2026-04-06 05:35:11.468891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:35:11.468911 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:35:11.468993 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-06 05:35:11.469013 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:35:11.469032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-7tdY8L-LV0U-b3l0-Z8I0-Y4ch-NDJ3-j6J7vO', 'dm-uuid-CRYPT-LUKS2-dd6ed06a0d554d6181a429bf5c5222d7-7tdY8L-LV0U-b3l0-Z8I0-Y4ch-NDJ3-j6J7vO'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-06 05:35:11.469050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:35:11.469076 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3-osd--block--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3', 'dm-uuid-LVM-UTQM7S53ibMHEifiI2Bv5Thw7s0lsM0j7tdY8LLV0Ub3l0Z8I0Y4chNDJ3j6J7vO'], 'uuids': ['dd6ed06a-0d55-4d61-81a4-29bf5c5222d7'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f369a6c0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['7tdY8L-LV0U-b3l0-Z8I0-Y4ch-NDJ3-j6J7vO']}})
2026-04-06 05:35:11.469097 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-bmjYoX-DOC2-0AWC-rYYB-WEnJ-01uQ-WQd2JR', 'scsi-0QEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867', 'scsi-SQEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48ce9836', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8c307d7c--3927--5061--a8a8--155bb148bb1a-osd--block--8c307d7c--3927--5061--a8a8--155bb148bb1a']}})
2026-04-06 05:35:11.469118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:35:11.469171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '40f67feb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part16', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part14', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part15', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part1', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-06 05:35:11.799786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:35:11.799913 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-06 05:35:11.799929 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ALmkUr-HWUZ-pUhC-Y8hy-Ck4S-Vv02-FoAkUp', 'dm-uuid-CRYPT-LUKS2-8337882314d24928900767488abc99a7-ALmkUr-HWUZ-pUhC-Y8hy-Ck4S-Vv02-FoAkUp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-06 05:35:11.799943 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:11.799956 | orchestrator |
2026-04-06 05:35:11.799967 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-06 05:35:11.800003 | orchestrator | Monday 06 April 2026 05:35:11 +0000 (0:00:00.379) 0:27:41.326 **********
2026-04-06 05:35:11.800016 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:35:11.800029 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8c307d7c--3927--5061--a8a8--155bb148bb1a-osd--block--8c307d7c--3927--5061--a8a8--155bb148bb1a', 'dm-uuid-LVM-5SBcK6LYcqc3U9JW4A7AEqQb9XhQaJZNALmkUrHWUZpUhCY8hyCk4SVv02FoAkUp'], 'uuids': ['83378823-14d2-4928-9007-67488abc99a7'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '48ce9836', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ALmkUr-HWUZ-pUhC-Y8hy-Ck4S-Vv02-FoAkUp']}}, 'ansible_loop_var': 'item'})
2026-04-06 05:35:11.800041 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de', 'scsi-SQEMU_QEMU_HARDDISK_4a868051-6760-4c3b-ae8b-ad951cf235de'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4a868051', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:35:11.800077 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-9JZghf-Tj4T-hJH3-TdHl-k5PF-Zmcx-ynVATr', 'scsi-0QEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554', 'scsi-SQEMU_QEMU_HARDDISK_f369a6c0-cc6b-402f-8203-4a676105f554'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f369a6c0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3-osd--block--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3']}}, 'ansible_loop_var': 'item'})
2026-04-06 05:35:11.800091 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:35:11.800109 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:35:11.800121 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:35:11.800132 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:35:11.800149 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-7tdY8L-LV0U-b3l0-Z8I0-Y4ch-NDJ3-j6J7vO', 'dm-uuid-CRYPT-LUKS2-dd6ed06a0d554d6181a429bf5c5222d7-7tdY8L-LV0U-b3l0-Z8I0-Y4ch-NDJ3-j6J7vO'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:35:14.143833 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-06 05:35:14.143943 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3-osd--block--c3bdc13a--4e4a--504e--9e7c--ad28314ab8c3', 'dm-uuid-LVM-UTQM7S53ibMHEifiI2Bv5Thw7s0lsM0j7tdY8LLV0Ub3l0Z8I0Y4chNDJ3j6J7vO'], 'uuids': ['dd6ed06a-0d55-4d61-81a4-29bf5c5222d7'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f369a6c0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['7tdY8L-LV0U-b3l0-Z8I0-Y4ch-NDJ3-j6J7vO']}}, 'ansible_loop_var': 'item'})
2026-04-06 05:35:14.143982 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-bmjYoX-DOC2-0AWC-rYYB-WEnJ-01uQ-WQd2JR', 'scsi-0QEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867', 'scsi-SQEMU_QEMU_HARDDISK_48ce9836-bd13-434e-b336-3f85c4684867'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48ce9836', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode':
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8c307d7c--3927--5061--a8a8--155bb148bb1a-osd--block--8c307d7c--3927--5061--a8a8--155bb148bb1a']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:35:14.143999 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:35:14.144042 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '40f67feb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part16', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part14', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part15', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part1', 'scsi-SQEMU_QEMU_HARDDISK_40f67feb-ef43-49bb-8f67-9921a7107336-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:35:14.144065 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:35:14.144077 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:35:14.144088 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ALmkUr-HWUZ-pUhC-Y8hy-Ck4S-Vv02-FoAkUp', 'dm-uuid-CRYPT-LUKS2-8337882314d24928900767488abc99a7-ALmkUr-HWUZ-pUhC-Y8hy-Ck4S-Vv02-FoAkUp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:35:14.144102 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:14.144115 | orchestrator | 2026-04-06 05:35:14.144128 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-06 05:35:14.144140 | orchestrator | Monday 06 April 2026 05:35:12 +0000 (0:00:00.402) 0:27:41.728 ********** 2026-04-06 05:35:14.144151 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:35:14.144163 | orchestrator | 2026-04-06 05:35:14.144174 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-06 05:35:14.144185 | orchestrator | Monday 06 April 2026 05:35:12 +0000 (0:00:00.489) 0:27:42.218 ********** 2026-04-06 05:35:14.144195 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:35:14.144206 | orchestrator | 2026-04-06 05:35:14.144217 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 05:35:14.144228 | orchestrator | Monday 06 April 2026 05:35:12 +0000 (0:00:00.155) 0:27:42.373 ********** 2026-04-06 05:35:14.144238 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:35:14.144249 | orchestrator | 2026-04-06 05:35:14.144260 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 05:35:14.144278 | orchestrator | Monday 06 April 2026 05:35:14 +0000 (0:00:01.482) 0:27:43.856 ********** 2026-04-06 05:35:29.946530 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:29.946703 | orchestrator | 2026-04-06 05:35:29.946721 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 05:35:29.946750 | orchestrator | Monday 06 April 2026 05:35:14 +0000 (0:00:00.142) 0:27:43.999 ********** 2026-04-06 05:35:29.946762 | orchestrator | skipping: [testbed-node-4] 2026-04-06 
05:35:29.946773 | orchestrator | 2026-04-06 05:35:29.946785 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 05:35:29.946819 | orchestrator | Monday 06 April 2026 05:35:14 +0000 (0:00:00.234) 0:27:44.234 ********** 2026-04-06 05:35:29.946830 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:29.946841 | orchestrator | 2026-04-06 05:35:29.946853 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-06 05:35:29.946864 | orchestrator | Monday 06 April 2026 05:35:14 +0000 (0:00:00.175) 0:27:44.409 ********** 2026-04-06 05:35:29.946876 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-06 05:35:29.946887 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-06 05:35:29.946898 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-06 05:35:29.946909 | orchestrator | 2026-04-06 05:35:29.946920 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-06 05:35:29.946931 | orchestrator | Monday 06 April 2026 05:35:15 +0000 (0:00:01.066) 0:27:45.475 ********** 2026-04-06 05:35:29.946942 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-06 05:35:29.946953 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-06 05:35:29.946964 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-06 05:35:29.946974 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:29.946985 | orchestrator | 2026-04-06 05:35:29.946996 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-06 05:35:29.947007 | orchestrator | Monday 06 April 2026 05:35:15 +0000 (0:00:00.162) 0:27:45.638 ********** 2026-04-06 05:35:29.947018 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-04-06 05:35:29.947029 | 
orchestrator | 2026-04-06 05:35:29.947041 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-06 05:35:29.947053 | orchestrator | Monday 06 April 2026 05:35:16 +0000 (0:00:00.575) 0:27:46.213 ********** 2026-04-06 05:35:29.947066 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:29.947079 | orchestrator | 2026-04-06 05:35:29.947092 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-06 05:35:29.947104 | orchestrator | Monday 06 April 2026 05:35:16 +0000 (0:00:00.156) 0:27:46.370 ********** 2026-04-06 05:35:29.947117 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:29.947130 | orchestrator | 2026-04-06 05:35:29.947142 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-06 05:35:29.947155 | orchestrator | Monday 06 April 2026 05:35:16 +0000 (0:00:00.153) 0:27:46.523 ********** 2026-04-06 05:35:29.947168 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:29.947180 | orchestrator | 2026-04-06 05:35:29.947193 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-06 05:35:29.947206 | orchestrator | Monday 06 April 2026 05:35:16 +0000 (0:00:00.159) 0:27:46.683 ********** 2026-04-06 05:35:29.947218 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:35:29.947232 | orchestrator | 2026-04-06 05:35:29.947245 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-06 05:35:29.947258 | orchestrator | Monday 06 April 2026 05:35:17 +0000 (0:00:00.266) 0:27:46.949 ********** 2026-04-06 05:35:29.947270 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-06 05:35:29.947283 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-06 05:35:29.947296 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-04-06 05:35:29.947308 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:29.947321 | orchestrator | 2026-04-06 05:35:29.947334 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-06 05:35:29.947346 | orchestrator | Monday 06 April 2026 05:35:17 +0000 (0:00:00.424) 0:27:47.373 ********** 2026-04-06 05:35:29.947359 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-06 05:35:29.947372 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-06 05:35:29.947392 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-06 05:35:29.947404 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:29.947415 | orchestrator | 2026-04-06 05:35:29.947426 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-06 05:35:29.947449 | orchestrator | Monday 06 April 2026 05:35:18 +0000 (0:00:00.398) 0:27:47.772 ********** 2026-04-06 05:35:29.947460 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-06 05:35:29.947471 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-06 05:35:29.947482 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-06 05:35:29.947492 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:29.947503 | orchestrator | 2026-04-06 05:35:29.947514 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-06 05:35:29.947525 | orchestrator | Monday 06 April 2026 05:35:18 +0000 (0:00:00.388) 0:27:48.160 ********** 2026-04-06 05:35:29.947536 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:35:29.947547 | orchestrator | 2026-04-06 05:35:29.947606 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-06 05:35:29.947617 | orchestrator | Monday 06 April 2026 05:35:18 +0000 
(0:00:00.172) 0:27:48.332 ********** 2026-04-06 05:35:29.947628 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-06 05:35:29.947639 | orchestrator | 2026-04-06 05:35:29.947649 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-06 05:35:29.947660 | orchestrator | Monday 06 April 2026 05:35:18 +0000 (0:00:00.354) 0:27:48.686 ********** 2026-04-06 05:35:29.947690 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:35:29.947701 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:35:29.947719 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:35:29.947730 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-06 05:35:29.947741 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-04-06 05:35:29.947751 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-06 05:35:29.947762 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-06 05:35:29.947773 | orchestrator | 2026-04-06 05:35:29.947784 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-06 05:35:29.947794 | orchestrator | Monday 06 April 2026 05:35:20 +0000 (0:00:01.155) 0:27:49.841 ********** 2026-04-06 05:35:29.947805 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:35:29.947816 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:35:29.947826 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:35:29.947837 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-04-06 05:35:29.947848 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-04-06 05:35:29.947858 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-06 05:35:29.947869 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-06 05:35:29.947880 | orchestrator | 2026-04-06 05:35:29.947890 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-04-06 05:35:29.947901 | orchestrator | Monday 06 April 2026 05:35:21 +0000 (0:00:01.762) 0:27:51.604 ********** 2026-04-06 05:35:29.947912 | orchestrator | changed: [testbed-node-4] 2026-04-06 05:35:29.947923 | orchestrator | 2026-04-06 05:35:29.947933 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-04-06 05:35:29.947944 | orchestrator | Monday 06 April 2026 05:35:23 +0000 (0:00:01.990) 0:27:53.594 ********** 2026-04-06 05:35:29.947955 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-06 05:35:29.947974 | orchestrator | 2026-04-06 05:35:29.947984 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-04-06 05:35:29.947995 | orchestrator | Monday 06 April 2026 05:35:25 +0000 (0:00:01.896) 0:27:55.491 ********** 2026-04-06 05:35:29.948006 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-06 05:35:29.948017 | orchestrator | 2026-04-06 05:35:29.948027 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-06 05:35:29.948038 | orchestrator | Monday 06 April 2026 05:35:27 +0000 (0:00:01.243) 0:27:56.735 ********** 2026-04-06 05:35:29.948049 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-04-06 05:35:29.948059 | orchestrator | 2026-04-06 05:35:29.948070 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-06 05:35:29.948081 | orchestrator | Monday 06 April 2026 05:35:27 +0000 (0:00:00.196) 0:27:56.931 ********** 2026-04-06 05:35:29.948092 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-04-06 05:35:29.948102 | orchestrator | 2026-04-06 05:35:29.948113 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-06 05:35:29.948124 | orchestrator | Monday 06 April 2026 05:35:27 +0000 (0:00:00.263) 0:27:57.195 ********** 2026-04-06 05:35:29.948134 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:29.948145 | orchestrator | 2026-04-06 05:35:29.948155 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-06 05:35:29.948166 | orchestrator | Monday 06 April 2026 05:35:27 +0000 (0:00:00.145) 0:27:57.341 ********** 2026-04-06 05:35:29.948177 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:35:29.948188 | orchestrator | 2026-04-06 05:35:29.948198 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-04-06 05:35:29.948209 | orchestrator | Monday 06 April 2026 05:35:28 +0000 (0:00:00.528) 0:27:57.869 ********** 2026-04-06 05:35:29.948220 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:35:29.948230 | orchestrator | 2026-04-06 05:35:29.948241 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-06 05:35:29.948252 | orchestrator | Monday 06 April 2026 05:35:28 +0000 (0:00:00.522) 0:27:58.392 ********** 2026-04-06 05:35:29.948263 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:35:29.948273 | orchestrator | 2026-04-06 05:35:29.948284 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-06 05:35:29.948295 | orchestrator | Monday 06 April 2026 05:35:29 +0000 (0:00:00.537) 0:27:58.930 ********** 2026-04-06 05:35:29.948306 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:29.948316 | orchestrator | 2026-04-06 05:35:29.948327 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-06 05:35:29.948338 | orchestrator | Monday 06 April 2026 05:35:29 +0000 (0:00:00.142) 0:27:59.072 ********** 2026-04-06 05:35:29.948348 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:29.948359 | orchestrator | 2026-04-06 05:35:29.948370 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-06 05:35:29.948380 | orchestrator | Monday 06 April 2026 05:35:29 +0000 (0:00:00.148) 0:27:59.220 ********** 2026-04-06 05:35:29.948391 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:29.948402 | orchestrator | 2026-04-06 05:35:29.948413 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-06 05:35:29.948430 | orchestrator | Monday 06 April 2026 05:35:29 +0000 (0:00:00.432) 0:27:59.653 ********** 2026-04-06 05:35:41.147956 | 
orchestrator | ok: [testbed-node-4] 2026-04-06 05:35:41.148068 | orchestrator | 2026-04-06 05:35:41.148111 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-06 05:35:41.148128 | orchestrator | Monday 06 April 2026 05:35:30 +0000 (0:00:00.548) 0:28:00.202 ********** 2026-04-06 05:35:41.148142 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:35:41.148182 | orchestrator | 2026-04-06 05:35:41.148197 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-06 05:35:41.148210 | orchestrator | Monday 06 April 2026 05:35:31 +0000 (0:00:00.529) 0:28:00.732 ********** 2026-04-06 05:35:41.148225 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:41.148239 | orchestrator | 2026-04-06 05:35:41.148253 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-06 05:35:41.148267 | orchestrator | Monday 06 April 2026 05:35:31 +0000 (0:00:00.135) 0:28:00.867 ********** 2026-04-06 05:35:41.148281 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:41.148294 | orchestrator | 2026-04-06 05:35:41.148309 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-06 05:35:41.148323 | orchestrator | Monday 06 April 2026 05:35:31 +0000 (0:00:00.139) 0:28:01.006 ********** 2026-04-06 05:35:41.148337 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:35:41.148351 | orchestrator | 2026-04-06 05:35:41.148365 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-06 05:35:41.148378 | orchestrator | Monday 06 April 2026 05:35:31 +0000 (0:00:00.150) 0:28:01.157 ********** 2026-04-06 05:35:41.148391 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:35:41.148405 | orchestrator | 2026-04-06 05:35:41.148419 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-06 05:35:41.148433 
| orchestrator | Monday 06 April 2026 05:35:31 +0000 (0:00:00.165) 0:28:01.323 ********** 2026-04-06 05:35:41.148447 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:35:41.148460 | orchestrator | 2026-04-06 05:35:41.148473 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-06 05:35:41.148487 | orchestrator | Monday 06 April 2026 05:35:31 +0000 (0:00:00.159) 0:28:01.482 ********** 2026-04-06 05:35:41.148502 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:41.148517 | orchestrator | 2026-04-06 05:35:41.148601 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-06 05:35:41.148621 | orchestrator | Monday 06 April 2026 05:35:31 +0000 (0:00:00.135) 0:28:01.618 ********** 2026-04-06 05:35:41.148638 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:41.148654 | orchestrator | 2026-04-06 05:35:41.148670 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-06 05:35:41.148699 | orchestrator | Monday 06 April 2026 05:35:32 +0000 (0:00:00.150) 0:28:01.768 ********** 2026-04-06 05:35:41.148714 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:41.148728 | orchestrator | 2026-04-06 05:35:41.148741 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-06 05:35:41.148755 | orchestrator | Monday 06 April 2026 05:35:32 +0000 (0:00:00.150) 0:28:01.919 ********** 2026-04-06 05:35:41.148768 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:35:41.148781 | orchestrator | 2026-04-06 05:35:41.148794 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-06 05:35:41.148806 | orchestrator | Monday 06 April 2026 05:35:32 +0000 (0:00:00.151) 0:28:02.070 ********** 2026-04-06 05:35:41.148819 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:35:41.148832 | orchestrator | 2026-04-06 05:35:41.148845 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-06 05:35:41.148859 | orchestrator | Monday 06 April 2026 05:35:32 +0000 (0:00:00.581) 0:28:02.652 ********** 2026-04-06 05:35:41.148871 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:41.148884 | orchestrator | 2026-04-06 05:35:41.148897 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-06 05:35:41.148910 | orchestrator | Monday 06 April 2026 05:35:33 +0000 (0:00:00.181) 0:28:02.834 ********** 2026-04-06 05:35:41.148924 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:41.148938 | orchestrator | 2026-04-06 05:35:41.148951 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-06 05:35:41.148964 | orchestrator | Monday 06 April 2026 05:35:33 +0000 (0:00:00.120) 0:28:02.954 ********** 2026-04-06 05:35:41.148978 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:41.149008 | orchestrator | 2026-04-06 05:35:41.149017 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-06 05:35:41.149025 | orchestrator | Monday 06 April 2026 05:35:33 +0000 (0:00:00.135) 0:28:03.090 ********** 2026-04-06 05:35:41.149032 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:41.149040 | orchestrator | 2026-04-06 05:35:41.149049 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-06 05:35:41.149057 | orchestrator | Monday 06 April 2026 05:35:33 +0000 (0:00:00.145) 0:28:03.236 ********** 2026-04-06 05:35:41.149065 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:35:41.149072 | orchestrator | 2026-04-06 05:35:41.149081 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-06 05:35:41.149088 | orchestrator | Monday 06 April 2026 05:35:33 +0000 (0:00:00.147) 0:28:03.383 ********** 
2026-04-06 05:35:41.149097 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:41.149105 | orchestrator |
2026-04-06 05:35:41.149113 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-06 05:35:41.149121 | orchestrator | Monday 06 April 2026 05:35:33 +0000 (0:00:00.139) 0:28:03.523 **********
2026-04-06 05:35:41.149129 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:41.149137 | orchestrator |
2026-04-06 05:35:41.149145 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-06 05:35:41.149154 | orchestrator | Monday 06 April 2026 05:35:33 +0000 (0:00:00.143) 0:28:03.667 **********
2026-04-06 05:35:41.149162 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:41.149169 | orchestrator |
2026-04-06 05:35:41.149177 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-06 05:35:41.149185 | orchestrator | Monday 06 April 2026 05:35:34 +0000 (0:00:00.144) 0:28:03.812 **********
2026-04-06 05:35:41.149193 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:41.149201 | orchestrator |
2026-04-06 05:35:41.149230 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-06 05:35:41.149248 | orchestrator | Monday 06 April 2026 05:35:34 +0000 (0:00:00.113) 0:28:03.925 **********
2026-04-06 05:35:41.149256 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:41.149264 | orchestrator |
2026-04-06 05:35:41.149272 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-06 05:35:41.149280 | orchestrator | Monday 06 April 2026 05:35:34 +0000 (0:00:00.131) 0:28:04.056 **********
2026-04-06 05:35:41.149288 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:41.149295 | orchestrator |
2026-04-06 05:35:41.149303 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-06 05:35:41.149311 | orchestrator | Monday 06 April 2026 05:35:34 +0000 (0:00:00.129) 0:28:04.186 **********
2026-04-06 05:35:41.149319 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:41.149327 | orchestrator |
2026-04-06 05:35:41.149335 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-06 05:35:41.149346 | orchestrator | Monday 06 April 2026 05:35:35 +0000 (0:00:00.541) 0:28:04.728 **********
2026-04-06 05:35:41.149359 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:35:41.149373 | orchestrator |
2026-04-06 05:35:41.149387 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-06 05:35:41.149399 | orchestrator | Monday 06 April 2026 05:35:35 +0000 (0:00:00.919) 0:28:05.647 **********
2026-04-06 05:35:41.149411 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:35:41.149423 | orchestrator |
2026-04-06 05:35:41.149436 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-06 05:35:41.149448 | orchestrator | Monday 06 April 2026 05:35:37 +0000 (0:00:01.209) 0:28:06.857 **********
2026-04-06 05:35:41.149462 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4
2026-04-06 05:35:41.149476 | orchestrator |
2026-04-06 05:35:41.149489 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-06 05:35:41.149503 | orchestrator | Monday 06 April 2026 05:35:37 +0000 (0:00:00.246) 0:28:07.103 **********
2026-04-06 05:35:41.149526 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:41.149565 | orchestrator |
2026-04-06 05:35:41.149576 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-06 05:35:41.149587 | orchestrator | Monday 06 April 2026 05:35:37 +0000 (0:00:00.153) 0:28:07.257 **********
2026-04-06 05:35:41.149599 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:41.149609 | orchestrator |
2026-04-06 05:35:41.149617 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-06 05:35:41.149625 | orchestrator | Monday 06 April 2026 05:35:37 +0000 (0:00:00.147) 0:28:07.404 **********
2026-04-06 05:35:41.149633 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-06 05:35:41.149641 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-06 05:35:41.149649 | orchestrator |
2026-04-06 05:35:41.149657 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-06 05:35:41.149665 | orchestrator | Monday 06 April 2026 05:35:38 +0000 (0:00:00.827) 0:28:08.232 **********
2026-04-06 05:35:41.149673 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:35:41.149681 | orchestrator |
2026-04-06 05:35:41.149688 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-06 05:35:41.149696 | orchestrator | Monday 06 April 2026 05:35:38 +0000 (0:00:00.436) 0:28:08.668 **********
2026-04-06 05:35:41.149704 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:41.149712 | orchestrator |
2026-04-06 05:35:41.149720 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-06 05:35:41.149728 | orchestrator | Monday 06 April 2026 05:35:39 +0000 (0:00:00.168) 0:28:08.836 **********
2026-04-06 05:35:41.149736 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:41.149744 | orchestrator |
2026-04-06 05:35:41.149752 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-06 05:35:41.149759 | orchestrator | Monday 06 April 2026 05:35:39 +0000 (0:00:00.171) 0:28:09.008 **********
2026-04-06 05:35:41.149767 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:41.149775 | orchestrator |
2026-04-06 05:35:41.149783 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-06 05:35:41.149791 | orchestrator | Monday 06 April 2026 05:35:39 +0000 (0:00:00.156) 0:28:09.164 **********
2026-04-06 05:35:41.149799 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4
2026-04-06 05:35:41.149806 | orchestrator |
2026-04-06 05:35:41.149814 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-06 05:35:41.149822 | orchestrator | Monday 06 April 2026 05:35:39 +0000 (0:00:00.503) 0:28:09.667 **********
2026-04-06 05:35:41.149830 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:35:41.149838 | orchestrator |
2026-04-06 05:35:41.149846 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-06 05:35:41.149853 | orchestrator | Monday 06 April 2026 05:35:40 +0000 (0:00:00.707) 0:28:10.375 **********
2026-04-06 05:35:41.149862 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-06 05:35:41.149869 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-06 05:35:41.149877 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-06 05:35:41.149885 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:41.149893 | orchestrator |
2026-04-06 05:35:41.149901 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-06 05:35:41.149908 | orchestrator | Monday 06 April 2026 05:35:40 +0000 (0:00:00.162) 0:28:10.538 **********
2026-04-06 05:35:41.149916 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:41.149924 | orchestrator |
2026-04-06 05:35:41.149932 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-06 05:35:41.149940 | orchestrator | Monday 06 April 2026 05:35:40 +0000 (0:00:00.148) 0:28:10.686 **********
2026-04-06 05:35:41.149954 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:41.149962 | orchestrator |
2026-04-06 05:35:41.149978 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-06 05:35:59.168473 | orchestrator | Monday 06 April 2026 05:35:41 +0000 (0:00:00.170) 0:28:10.857 **********
2026-04-06 05:35:59.168625 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:59.168641 | orchestrator |
2026-04-06 05:35:59.168652 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-06 05:35:59.168662 | orchestrator | Monday 06 April 2026 05:35:41 +0000 (0:00:00.161) 0:28:11.019 **********
2026-04-06 05:35:59.168672 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:59.168682 | orchestrator |
2026-04-06 05:35:59.168691 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-06 05:35:59.168701 | orchestrator | Monday 06 April 2026 05:35:41 +0000 (0:00:00.171) 0:28:11.190 **********
2026-04-06 05:35:59.168711 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:59.168720 | orchestrator |
2026-04-06 05:35:59.168730 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-06 05:35:59.168739 | orchestrator | Monday 06 April 2026 05:35:41 +0000 (0:00:00.165) 0:28:11.355 **********
2026-04-06 05:35:59.168749 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:35:59.168759 | orchestrator |
2026-04-06 05:35:59.168769 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-06 05:35:59.168779 | orchestrator | Monday 06 April 2026 05:35:43 +0000 (0:00:01.505) 0:28:12.861 **********
2026-04-06 05:35:59.168789 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:35:59.168798 | orchestrator |
2026-04-06 05:35:59.168808 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-06 05:35:59.168817 | orchestrator | Monday 06 April 2026 05:35:43 +0000 (0:00:00.138) 0:28:12.999 **********
2026-04-06 05:35:59.168827 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4
2026-04-06 05:35:59.168836 | orchestrator |
2026-04-06 05:35:59.168846 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-06 05:35:59.168855 | orchestrator | Monday 06 April 2026 05:35:43 +0000 (0:00:00.231) 0:28:13.230 **********
2026-04-06 05:35:59.168865 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:59.168875 | orchestrator |
2026-04-06 05:35:59.168884 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-06 05:35:59.168893 | orchestrator | Monday 06 April 2026 05:35:43 +0000 (0:00:00.165) 0:28:13.396 **********
2026-04-06 05:35:59.168903 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:59.168913 | orchestrator |
2026-04-06 05:35:59.168922 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-06 05:35:59.168932 | orchestrator | Monday 06 April 2026 05:35:43 +0000 (0:00:00.144) 0:28:13.541 **********
2026-04-06 05:35:59.168941 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:59.168951 | orchestrator |
2026-04-06 05:35:59.168960 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-06 05:35:59.168970 | orchestrator | Monday 06 April 2026 05:35:44 +0000 (0:00:00.476) 0:28:14.018 **********
2026-04-06 05:35:59.168980 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:59.168989 | orchestrator |
2026-04-06 05:35:59.168999 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-06 05:35:59.169017 | orchestrator | Monday 06 April 2026 05:35:44 +0000 (0:00:00.154) 0:28:14.172 **********
2026-04-06 05:35:59.169035 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:59.169052 | orchestrator |
2026-04-06 05:35:59.169078 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-06 05:35:59.169096 | orchestrator | Monday 06 April 2026 05:35:44 +0000 (0:00:00.170) 0:28:14.343 **********
2026-04-06 05:35:59.169113 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:59.169130 | orchestrator |
2026-04-06 05:35:59.169147 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-06 05:35:59.169162 | orchestrator | Monday 06 April 2026 05:35:44 +0000 (0:00:00.146) 0:28:14.490 **********
2026-04-06 05:35:59.169208 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:59.169226 | orchestrator |
2026-04-06 05:35:59.169244 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-06 05:35:59.169260 | orchestrator | Monday 06 April 2026 05:35:44 +0000 (0:00:00.218) 0:28:14.708 **********
2026-04-06 05:35:59.169275 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:59.169292 | orchestrator |
2026-04-06 05:35:59.169308 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-06 05:35:59.169326 | orchestrator | Monday 06 April 2026 05:35:45 +0000 (0:00:00.158) 0:28:14.866 **********
2026-04-06 05:35:59.169344 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:35:59.169361 | orchestrator |
2026-04-06 05:35:59.169379 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-06 05:35:59.169390 | orchestrator | Monday 06 April 2026 05:35:45 +0000 (0:00:00.258) 0:28:15.125 **********
2026-04-06 05:35:59.169400 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4
2026-04-06 05:35:59.169411 | orchestrator |
2026-04-06 05:35:59.169421 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-06 05:35:59.169430 | orchestrator | Monday 06 April 2026 05:35:45 +0000 (0:00:00.238) 0:28:15.364 **********
2026-04-06 05:35:59.169440 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph)
2026-04-06 05:35:59.169450 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/)
2026-04-06 05:35:59.169459 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-04-06 05:35:59.169469 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-04-06 05:35:59.169478 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-04-06 05:35:59.169488 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-04-06 05:35:59.169529 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-04-06 05:35:59.169540 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-04-06 05:35:59.169549 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-06 05:35:59.169559 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-06 05:35:59.169569 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-06 05:35:59.169605 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-06 05:35:59.169617 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-06 05:35:59.169627 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-06 05:35:59.169636 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2026-04-06 05:35:59.169646 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph)
2026-04-06 05:35:59.169655 | orchestrator |
2026-04-06 05:35:59.169665 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-06 05:35:59.169674 | orchestrator | Monday 06 April 2026 05:35:51 +0000 (0:00:05.489) 0:28:20.853 **********
2026-04-06 05:35:59.169684 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4
2026-04-06 05:35:59.169693 | orchestrator |
2026-04-06 05:35:59.169703 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-06 05:35:59.169712 | orchestrator | Monday 06 April 2026 05:35:51 +0000 (0:00:00.227) 0:28:21.081 **********
2026-04-06 05:35:59.169722 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-06 05:35:59.169733 | orchestrator |
2026-04-06 05:35:59.169742 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-06 05:35:59.169752 | orchestrator | Monday 06 April 2026 05:35:52 +0000 (0:00:00.827) 0:28:21.908 **********
2026-04-06 05:35:59.169762 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-06 05:35:59.169781 | orchestrator |
2026-04-06 05:35:59.169791 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-06 05:35:59.169800 | orchestrator | Monday 06 April 2026 05:35:53 +0000 (0:00:00.149) 0:28:22.871 **********
2026-04-06 05:35:59.169810 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:59.169819 | orchestrator |
2026-04-06 05:35:59.169829 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-06 05:35:59.169838 | orchestrator | Monday 06 April 2026 05:35:53 +0000 (0:00:00.149) 0:28:23.020 **********
2026-04-06 05:35:59.169848 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:59.169857 | orchestrator |
2026-04-06 05:35:59.169867 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-06 05:35:59.169876 | orchestrator | Monday 06 April 2026 05:35:53 +0000 (0:00:00.142) 0:28:23.163 **********
2026-04-06 05:35:59.169886 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:59.169895 | orchestrator |
2026-04-06 05:35:59.169904 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-06 05:35:59.169914 | orchestrator | Monday 06 April 2026 05:35:53 +0000 (0:00:00.158) 0:28:23.321 **********
2026-04-06 05:35:59.169923 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:59.169933 | orchestrator |
2026-04-06 05:35:59.169942 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-06 05:35:59.169952 | orchestrator | Monday 06 April 2026 05:35:53 +0000 (0:00:00.136) 0:28:23.458 **********
2026-04-06 05:35:59.169961 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:59.169971 | orchestrator |
2026-04-06 05:35:59.169980 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-06 05:35:59.169990 | orchestrator | Monday 06 April 2026 05:35:53 +0000 (0:00:00.157) 0:28:23.615 **********
2026-04-06 05:35:59.169999 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:59.170009 | orchestrator |
2026-04-06 05:35:59.170078 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-06 05:35:59.170089 | orchestrator | Monday 06 April 2026 05:35:54 +0000 (0:00:00.153) 0:28:23.768 **********
2026-04-06 05:35:59.170098 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:59.170108 | orchestrator |
2026-04-06 05:35:59.170127 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-06 05:35:59.170137 | orchestrator | Monday 06 April 2026 05:35:54 +0000 (0:00:00.148) 0:28:23.917 **********
2026-04-06 05:35:59.170146 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:59.170156 | orchestrator |
2026-04-06 05:35:59.170166 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-06 05:35:59.170175 | orchestrator | Monday 06 April 2026 05:35:54 +0000 (0:00:00.143) 0:28:24.061 **********
2026-04-06 05:35:59.170185 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:59.170194 | orchestrator |
2026-04-06 05:35:59.170204 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-06 05:35:59.170214 | orchestrator | Monday 06 April 2026 05:35:54 +0000 (0:00:00.137) 0:28:24.198 **********
2026-04-06 05:35:59.170223 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:59.170233 | orchestrator |
2026-04-06 05:35:59.170242 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-06 05:35:59.170252 | orchestrator | Monday 06 April 2026 05:35:54 +0000 (0:00:00.170) 0:28:24.369 **********
2026-04-06 05:35:59.170261 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:35:59.170271 | orchestrator |
2026-04-06 05:35:59.170280 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-06 05:35:59.170290 | orchestrator | Monday 06 April 2026 05:35:54 +0000 (0:00:00.142) 0:28:24.512 **********
2026-04-06 05:35:59.170299 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)]
2026-04-06 05:35:59.170309 | orchestrator |
2026-04-06 05:35:59.170319 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-06 05:35:59.170328 | orchestrator | Monday 06 April 2026 05:35:58 +0000 (0:00:04.182) 0:28:28.694 **********
2026-04-06 05:35:59.170345 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-06 05:35:59.170355 | orchestrator |
2026-04-06 05:35:59.170370 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-06 05:36:21.330230 | orchestrator | Monday 06 April 2026 05:35:59 +0000 (0:00:00.177) 0:28:28.871 **********
2026-04-06 05:36:21.330358 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-04-06 05:36:21.330388 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-04-06 05:36:21.330409 | orchestrator |
2026-04-06 05:36:21.330422 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-06 05:36:21.330433 | orchestrator | Monday 06 April 2026 05:36:02 +0000 (0:00:03.764) 0:28:32.635 **********
2026-04-06 05:36:21.330445 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:36:21.330457 | orchestrator |
2026-04-06 05:36:21.330532 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-06 05:36:21.330543 | orchestrator | Monday 06 April 2026 05:36:03 +0000 (0:00:00.129) 0:28:32.765 **********
2026-04-06 05:36:21.330554 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:36:21.330565 | orchestrator |
2026-04-06 05:36:21.330577 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-06 05:36:21.330589 | orchestrator | Monday 06 April 2026 05:36:03 +0000 (0:00:00.125) 0:28:32.891 **********
2026-04-06 05:36:21.330601 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:36:21.330612 | orchestrator |
2026-04-06 05:36:21.330623 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-06 05:36:21.330635 | orchestrator | Monday 06 April 2026 05:36:03 +0000 (0:00:00.157) 0:28:33.048 **********
2026-04-06 05:36:21.330646 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:36:21.330656 | orchestrator |
2026-04-06 05:36:21.330667 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-06 05:36:21.330678 | orchestrator | Monday 06 April 2026 05:36:03 +0000 (0:00:00.161) 0:28:33.209 **********
2026-04-06 05:36:21.330689 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:36:21.330700 | orchestrator |
2026-04-06 05:36:21.330710 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-06 05:36:21.330721 | orchestrator | Monday 06 April 2026 05:36:03 +0000 (0:00:00.160) 0:28:33.370 **********
2026-04-06 05:36:21.330732 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:36:21.330744 | orchestrator |
2026-04-06 05:36:21.330755 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-06 05:36:21.330768 | orchestrator | Monday 06 April 2026 05:36:03 +0000 (0:00:00.249) 0:28:33.620 **********
2026-04-06 05:36:21.330780 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-06 05:36:21.330793 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-06 05:36:21.330805 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-06 05:36:21.330818 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:36:21.330831 | orchestrator |
2026-04-06 05:36:21.330843 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-06 05:36:21.330856 | orchestrator | Monday 06 April 2026 05:36:04 +0000 (0:00:00.434) 0:28:34.054 **********
2026-04-06 05:36:21.330868 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-06 05:36:21.330903 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-06 05:36:21.330915 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-06 05:36:21.330928 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:36:21.330940 | orchestrator |
2026-04-06 05:36:21.330952 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-06 05:36:21.330964 | orchestrator | Monday 06 April 2026 05:36:04 +0000 (0:00:00.443) 0:28:34.497 **********
2026-04-06 05:36:21.330977 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-06 05:36:21.330990 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-06 05:36:21.331003 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-06 05:36:21.331015 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:36:21.331028 | orchestrator |
2026-04-06 05:36:21.331041 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-06 05:36:21.331054 | orchestrator | Monday 06 April 2026 05:36:05 +0000 (0:00:00.796) 0:28:35.294 **********
2026-04-06 05:36:21.331066 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:36:21.331079 | orchestrator |
2026-04-06 05:36:21.331092 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-06 05:36:21.331105 | orchestrator | Monday 06 April 2026 05:36:05 +0000 (0:00:00.165) 0:28:35.459 **********
2026-04-06 05:36:21.331118 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-06 05:36:21.331132 | orchestrator |
2026-04-06 05:36:21.331143 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-06 05:36:21.331154 | orchestrator | Monday 06 April 2026 05:36:06 +0000 (0:00:01.113) 0:28:36.573 **********
2026-04-06 05:36:21.331165 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:36:21.331175 | orchestrator |
2026-04-06 05:36:21.331186 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-04-06 05:36:21.331197 | orchestrator | Monday 06 April 2026 05:36:07 +0000 (0:00:00.844) 0:28:37.417 **********
2026-04-06 05:36:21.331208 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4
2026-04-06 05:36:21.331218 | orchestrator |
2026-04-06 05:36:21.331254 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-04-06 05:36:21.331266 | orchestrator | Monday 06 April 2026 05:36:07 +0000 (0:00:00.193) 0:28:37.610 **********
2026-04-06 05:36:21.331278 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-06 05:36:21.331289 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-06 05:36:21.331300 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-06 05:36:21.331310 | orchestrator |
2026-04-06 05:36:21.331321 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-04-06 05:36:21.331332 | orchestrator | Monday 06 April 2026 05:36:10 +0000 (0:00:02.177) 0:28:39.788 **********
2026-04-06 05:36:21.331342 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-04-06 05:36:21.331353 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-06 05:36:21.331364 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:36:21.331374 | orchestrator |
2026-04-06 05:36:21.331385 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-04-06 05:36:21.331396 | orchestrator | Monday 06 April 2026 05:36:11 +0000 (0:00:00.957) 0:28:40.745 **********
2026-04-06 05:36:21.331407 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:36:21.331418 | orchestrator |
2026-04-06 05:36:21.331428 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-04-06 05:36:21.331439 | orchestrator | Monday 06 April 2026 05:36:11 +0000 (0:00:00.119) 0:28:40.865 **********
2026-04-06 05:36:21.331450 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4
2026-04-06 05:36:21.331482 | orchestrator |
2026-04-06 05:36:21.331494 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-04-06 05:36:21.331505 | orchestrator | Monday 06 April 2026 05:36:11 +0000 (0:00:00.219) 0:28:41.085 **********
2026-04-06 05:36:21.331528 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-06 05:36:21.331541 | orchestrator |
2026-04-06 05:36:21.331551 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-04-06 05:36:21.331562 | orchestrator | Monday 06 April 2026 05:36:11 +0000 (0:00:00.604) 0:28:41.689 **********
2026-04-06 05:36:21.331573 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-06 05:36:21.331583 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-04-06 05:36:21.331594 | orchestrator |
2026-04-06 05:36:21.331605 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-04-06 05:36:21.331615 | orchestrator | Monday 06 April 2026 05:36:16 +0000 (0:00:04.279) 0:28:45.970 **********
2026-04-06 05:36:21.331626 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-06 05:36:21.331637 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-06 05:36:21.331647 | orchestrator |
2026-04-06 05:36:21.331658 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-04-06 05:36:21.331669 | orchestrator | Monday 06 April 2026 05:36:18 +0000 (0:00:02.723) 0:28:48.693 **********
2026-04-06 05:36:21.331679 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-04-06 05:36:21.331690 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:36:21.331701 | orchestrator |
2026-04-06 05:36:21.331711 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-04-06 05:36:21.331722 | orchestrator | Monday 06 April 2026 05:36:19 +0000 (0:00:01.014) 0:28:49.708 **********
2026-04-06 05:36:21.331733 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4
2026-04-06 05:36:21.331743 | orchestrator |
2026-04-06 05:36:21.331754 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-04-06 05:36:21.331765 | orchestrator | Monday 06 April 2026 05:36:20 +0000 (0:00:00.246) 0:28:49.954 **********
2026-04-06 05:36:21.331775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:36:21.331787 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:36:21.331798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:36:21.331809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:36:21.331820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:36:21.331830 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:36:21.331841 | orchestrator |
2026-04-06 05:36:21.331852 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-04-06 05:36:21.331862 | orchestrator | Monday 06 April 2026 05:36:20 +0000 (0:00:00.625) 0:28:50.579 **********
2026-04-06 05:36:21.331873 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:36:21.331884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:36:21.331895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:36:21.331918 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:37:05.089469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:37:05.089670 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:37:05.089703 | orchestrator |
2026-04-06 05:37:05.089724 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-04-06 05:37:05.089746 | orchestrator | Monday 06 April 2026 05:36:21 +0000 (0:00:00.635) 0:28:51.215 **********
2026-04-06 05:37:05.089767 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:37:05.089789 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:37:05.089808 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:37:05.089827 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:37:05.089848 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:37:05.089867 | orchestrator |
2026-04-06 05:37:05.089885 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-04-06 05:37:05.089903 | orchestrator | Monday 06 April 2026 05:36:52 +0000 (0:00:30.620) 0:29:21.835 **********
2026-04-06 05:37:05.089923 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:37:05.089942 | orchestrator |
2026-04-06 05:37:05.089963 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-04-06 05:37:05.089983 | orchestrator | Monday 06 April 2026 05:36:52 +0000 (0:00:00.131) 0:29:21.967 **********
2026-04-06 05:37:05.090004 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:37:05.090081 | orchestrator |
2026-04-06 05:37:05.090102 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-04-06 05:37:05.090121 | orchestrator | Monday 06 April 2026 05:36:52 +0000 (0:00:00.117) 0:29:22.085 **********
2026-04-06 05:37:05.090140 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4
2026-04-06 05:37:05.090175 | orchestrator |
2026-04-06 05:37:05.090195 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-04-06 05:37:05.090214 | orchestrator | Monday 06 April 2026 05:36:52 +0000 (0:00:00.208) 0:29:22.293 **********
2026-04-06 05:37:05.090232 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4
2026-04-06 05:37:05.090251 | orchestrator |
2026-04-06 05:37:05.090272 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-04-06 05:37:05.090293 | orchestrator | Monday 06 April 2026 05:36:52 +0000 (0:00:00.200) 0:29:22.494 **********
2026-04-06 05:37:05.090314 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:37:05.090336 | orchestrator |
2026-04-06 05:37:05.090353 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-04-06 05:37:05.090365 | orchestrator | Monday 06 April 2026 05:36:53 +0000 (0:00:01.053) 0:29:23.548 **********
2026-04-06 05:37:05.090376 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:37:05.090386 | orchestrator |
2026-04-06 05:37:05.090474 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-04-06 05:37:05.090487 | orchestrator | Monday 06 April 2026 05:36:55 +0000 (0:00:01.206) 0:29:24.754 **********
2026-04-06 05:37:05.090498 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:37:05.090509 | orchestrator |
2026-04-06 05:37:05.090520 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-04-06 05:37:05.090531 | orchestrator | Monday 06 April 2026 05:36:56 +0000 (0:00:01.252) 0:29:26.007 **********
2026-04-06 05:37:05.090542 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-06 05:37:05.090569 | orchestrator |
2026-04-06 05:37:05.090580 | orchestrator | PLAY [Upgrade ceph rgws cluster] ***********************************************
2026-04-06 05:37:05.090591 | orchestrator |
2026-04-06 05:37:05.090602 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-06 05:37:05.090613 | orchestrator | Monday 06 April 2026 05:36:58 +0000 (0:00:02.401) 0:29:28.408 **********
2026-04-06 05:37:05.090624 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5
2026-04-06 05:37:05.090635 | orchestrator |
2026-04-06 05:37:05.090645 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-06 05:37:05.090656 | orchestrator | Monday 06 April 2026 05:36:58 +0000 (0:00:00.234) 0:29:28.643 **********
2026-04-06 05:37:05.090667 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:37:05.090678 | orchestrator |
2026-04-06 05:37:05.090689 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-06 05:37:05.090700 | orchestrator | Monday 06 April 2026 05:36:59 +0000 (0:00:00.136) 0:29:29.127 **********
2026-04-06 05:37:05.090710 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:37:05.090721 | orchestrator |
2026-04-06 05:37:05.090732 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-06 05:37:05.090743 | orchestrator | Monday 06 April 2026 05:36:59 +0000 (0:00:00.443) 0:29:29.264 **********
2026-04-06 05:37:05.090754 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:37:05.090765 | orchestrator |
2026-04-06 05:37:05.090776 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-06 05:37:05.090787 | orchestrator | Monday 06 April 2026 05:36:59 +0000 (0:00:00.443) 0:29:29.708 **********
2026-04-06 05:37:05.090798 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:37:05.090809 | orchestrator |
2026-04-06 05:37:05.090846 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-06 05:37:05.090857 | orchestrator | Monday 06
April 2026 05:37:00 +0000 (0:00:00.160) 0:29:29.868 ********** 2026-04-06 05:37:05.090868 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:05.090879 | orchestrator | 2026-04-06 05:37:05.090890 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-06 05:37:05.090901 | orchestrator | Monday 06 April 2026 05:37:00 +0000 (0:00:00.146) 0:29:30.015 ********** 2026-04-06 05:37:05.090912 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:05.090923 | orchestrator | 2026-04-06 05:37:05.090934 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-06 05:37:05.090945 | orchestrator | Monday 06 April 2026 05:37:00 +0000 (0:00:00.463) 0:29:30.479 ********** 2026-04-06 05:37:05.090955 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:05.090966 | orchestrator | 2026-04-06 05:37:05.090977 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-06 05:37:05.090988 | orchestrator | Monday 06 April 2026 05:37:00 +0000 (0:00:00.155) 0:29:30.635 ********** 2026-04-06 05:37:05.090999 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:05.091010 | orchestrator | 2026-04-06 05:37:05.091021 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-06 05:37:05.091031 | orchestrator | Monday 06 April 2026 05:37:01 +0000 (0:00:00.147) 0:29:30.782 ********** 2026-04-06 05:37:05.091042 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:37:05.091053 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:37:05.091064 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:37:05.091075 | orchestrator | 2026-04-06 05:37:05.091085 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-04-06 05:37:05.091096 | orchestrator | Monday 06 April 2026 05:37:01 +0000 (0:00:00.741) 0:29:31.524 ********** 2026-04-06 05:37:05.091107 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:05.091118 | orchestrator | 2026-04-06 05:37:05.091129 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-06 05:37:05.091139 | orchestrator | Monday 06 April 2026 05:37:02 +0000 (0:00:00.252) 0:29:31.776 ********** 2026-04-06 05:37:05.091157 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:37:05.091168 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:37:05.091179 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:37:05.091189 | orchestrator | 2026-04-06 05:37:05.091200 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-06 05:37:05.091211 | orchestrator | Monday 06 April 2026 05:37:03 +0000 (0:00:01.867) 0:29:33.644 ********** 2026-04-06 05:37:05.091222 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-06 05:37:05.091233 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-06 05:37:05.091244 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-06 05:37:05.091263 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:05.091281 | orchestrator | 2026-04-06 05:37:05.091300 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-06 05:37:05.091318 | orchestrator | Monday 06 April 2026 05:37:04 +0000 (0:00:00.454) 0:29:34.098 ********** 2026-04-06 05:37:05.091339 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-06 05:37:05.091362 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-06 05:37:05.091383 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-06 05:37:05.091429 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:05.091449 | orchestrator | 2026-04-06 05:37:05.091469 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-06 05:37:05.091488 | orchestrator | Monday 06 April 2026 05:37:05 +0000 (0:00:00.627) 0:29:34.726 ********** 2026-04-06 05:37:05.091634 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:37:05.091680 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:37:09.434751 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-06 05:37:09.434889 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:09.434908 | orchestrator | 2026-04-06 05:37:09.434921 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-06 05:37:09.434934 | orchestrator | Monday 06 April 2026 05:37:05 +0000 (0:00:00.180) 0:29:34.907 ********** 2026-04-06 05:37:09.434979 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '06ed7bf51830', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-06 05:37:02.624782', 'end': '2026-04-06 05:37:02.668161', 'delta': '0:00:00.043379', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['06ed7bf51830'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-06 05:37:09.434996 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '6879ce368bbc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-06 05:37:03.154854', 'end': '2026-04-06 05:37:03.197359', 'delta': '0:00:00.042505', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6879ce368bbc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-06 05:37:09.435008 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'a00606ebddc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-06 05:37:03.731556', 'end': '2026-04-06 05:37:03.781304', 'delta': '0:00:00.049748', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a00606ebddc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-06 05:37:09.435019 | orchestrator | 2026-04-06 05:37:09.435031 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-06 05:37:09.435042 | orchestrator | Monday 06 April 2026 05:37:05 +0000 (0:00:00.215) 0:29:35.122 ********** 2026-04-06 05:37:09.435054 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:09.435066 | orchestrator | 2026-04-06 05:37:09.435077 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-06 05:37:09.435088 | orchestrator | Monday 06 April 2026 05:37:05 +0000 (0:00:00.255) 0:29:35.378 ********** 2026-04-06 05:37:09.435099 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:09.435124 | orchestrator | 2026-04-06 05:37:09.435145 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-04-06 05:37:09.435157 | orchestrator | Monday 06 April 2026 05:37:05 +0000 (0:00:00.244) 0:29:35.622 ********** 2026-04-06 05:37:09.435168 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:09.435179 | orchestrator | 2026-04-06 05:37:09.435189 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-06 05:37:09.435200 | orchestrator | Monday 06 April 2026 05:37:06 +0000 (0:00:00.177) 0:29:35.799 ********** 2026-04-06 05:37:09.435211 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-06 05:37:09.435223 | orchestrator | 2026-04-06 05:37:09.435236 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-06 05:37:09.435250 | orchestrator | Monday 06 April 2026 05:37:07 +0000 (0:00:01.321) 0:29:37.121 ********** 2026-04-06 05:37:09.435264 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:09.435277 | orchestrator | 2026-04-06 05:37:09.435309 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-06 05:37:09.435331 | orchestrator | Monday 06 April 2026 05:37:07 +0000 (0:00:00.463) 0:29:37.584 ********** 2026-04-06 05:37:09.435365 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:09.435379 | orchestrator | 2026-04-06 05:37:09.435627 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-06 05:37:09.435693 | orchestrator | Monday 06 April 2026 05:37:08 +0000 (0:00:00.139) 0:29:37.724 ********** 2026-04-06 05:37:09.435702 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:09.435712 | orchestrator | 2026-04-06 05:37:09.435719 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-06 05:37:09.435727 | orchestrator | Monday 06 April 2026 05:37:08 +0000 (0:00:00.243) 0:29:37.967 ********** 2026-04-06 05:37:09.435734 | orchestrator | 
skipping: [testbed-node-5] 2026-04-06 05:37:09.435741 | orchestrator | 2026-04-06 05:37:09.435748 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-06 05:37:09.435755 | orchestrator | Monday 06 April 2026 05:37:08 +0000 (0:00:00.127) 0:29:38.095 ********** 2026-04-06 05:37:09.435761 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:09.435768 | orchestrator | 2026-04-06 05:37:09.435775 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-06 05:37:09.435782 | orchestrator | Monday 06 April 2026 05:37:08 +0000 (0:00:00.142) 0:29:38.238 ********** 2026-04-06 05:37:09.435788 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:09.435797 | orchestrator | 2026-04-06 05:37:09.435804 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-06 05:37:09.435810 | orchestrator | Monday 06 April 2026 05:37:08 +0000 (0:00:00.178) 0:29:38.417 ********** 2026-04-06 05:37:09.435817 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:09.435823 | orchestrator | 2026-04-06 05:37:09.435830 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-06 05:37:09.435837 | orchestrator | Monday 06 April 2026 05:37:08 +0000 (0:00:00.136) 0:29:38.553 ********** 2026-04-06 05:37:09.435843 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:09.435850 | orchestrator | 2026-04-06 05:37:09.435856 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-06 05:37:09.435863 | orchestrator | Monday 06 April 2026 05:37:09 +0000 (0:00:00.176) 0:29:38.730 ********** 2026-04-06 05:37:09.435870 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:09.435876 | orchestrator | 2026-04-06 05:37:09.435883 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-06 05:37:09.435891 
| orchestrator | Monday 06 April 2026 05:37:09 +0000 (0:00:00.146) 0:29:38.877 ********** 2026-04-06 05:37:09.435897 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:09.435904 | orchestrator | 2026-04-06 05:37:09.435911 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-06 05:37:09.435917 | orchestrator | Monday 06 April 2026 05:37:09 +0000 (0:00:00.166) 0:29:39.043 ********** 2026-04-06 05:37:09.435927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:37:09.435939 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d79f264--f564--5244--b3d4--1e30cd615742-osd--block--4d79f264--f564--5244--b3d4--1e30cd615742', 'dm-uuid-LVM-Z6Gfl68NWHSIaTDLndMKbJ9g2vXxLKS7H7IVDVpTPXM3dDz207hlZrQACS13BMNP'], 'uuids': ['22ded8c8-9142-404c-a572-856e0a8f4fba'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c3f554c9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['H7IVDV-pTPX-M3dD-z207-hlZr-QACS-13BMNP']}})  2026-04-06 05:37:09.435979 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c', 'scsi-SQEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd180ec14', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-06 05:37:09.436026 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-lROe02-FRbV-W78v-Dfl5-E5Bd-fAVM-rPPzrC', 'scsi-0QEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d', 'scsi-SQEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '43e26771', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--fcd584d6--c8ff--5eaf--81cc--26105cfb5447-osd--block--fcd584d6--c8ff--5eaf--81cc--26105cfb5447']}})  2026-04-06 05:37:09.560891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:37:09.560984 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:37:09.561002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-06 05:37:09.561018 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:37:09.561031 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt', 'dm-uuid-CRYPT-LUKS2-0cb92a9095ac4932ba9885def0a3f871-WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-06 05:37:09.561071 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:37:09.561085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fcd584d6--c8ff--5eaf--81cc--26105cfb5447-osd--block--fcd584d6--c8ff--5eaf--81cc--26105cfb5447', 'dm-uuid-LVM-DDg0C3XoaiYrOzMcB0kfPfqzHg8E5JhRWG4AoOycNeM5Q2WICfjMBHF0YX2mqeJt'], 'uuids': ['0cb92a90-95ac-4932-ba98-85def0a3f871'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '43e26771', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt']}})  2026-04-06 05:37:09.561160 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5lLdRw-7tLp-t2wE-raTC-2xO3-NEEr-mCIRos', 'scsi-0QEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0', 'scsi-SQEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c3f554c9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--4d79f264--f564--5244--b3d4--1e30cd615742-osd--block--4d79f264--f564--5244--b3d4--1e30cd615742']}})  2026-04-06 05:37:09.561173 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:37:09.561185 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd99642af', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-06 05:37:09.561204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:37:09.561212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-06 05:37:09.561231 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-H7IVDV-pTPX-M3dD-z207-hlZr-QACS-13BMNP', 'dm-uuid-CRYPT-LUKS2-22ded8c89142404ca572856e0a8f4fba-H7IVDV-pTPX-M3dD-z207-hlZr-QACS-13BMNP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-06 05:37:09.908644 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:09.908732 | orchestrator | 2026-04-06 05:37:09.908741 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-06 05:37:09.908749 | orchestrator | Monday 06 April 2026 05:37:09 +0000 (0:00:00.351) 0:29:39.394 ********** 2026-04-06 05:37:09.908759 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:37:09.908769 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d79f264--f564--5244--b3d4--1e30cd615742-osd--block--4d79f264--f564--5244--b3d4--1e30cd615742', 'dm-uuid-LVM-Z6Gfl68NWHSIaTDLndMKbJ9g2vXxLKS7H7IVDVpTPXM3dDz207hlZrQACS13BMNP'], 'uuids': ['22ded8c8-9142-404c-a572-856e0a8f4fba'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'c3f554c9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['H7IVDV-pTPX-M3dD-z207-hlZr-QACS-13BMNP']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:37:09.908775 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c', 'scsi-SQEMU_QEMU_HARDDISK_d180ec14-e159-4180-82cb-d01a3342930c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd180ec14', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:37:09.908807 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-lROe02-FRbV-W78v-Dfl5-E5Bd-fAVM-rPPzrC', 'scsi-0QEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d', 'scsi-SQEMU_QEMU_HARDDISK_43e26771-fa08-421b-85bd-bea5ed7d9f4d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '43e26771', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--fcd584d6--c8ff--5eaf--81cc--26105cfb5447-osd--block--fcd584d6--c8ff--5eaf--81cc--26105cfb5447']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:37:09.908841 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:37:09.908849 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:37:09.908856 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-06-01-39-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:37:09.908863 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:37:09.908875 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt', 'dm-uuid-CRYPT-LUKS2-0cb92a9095ac4932ba9885def0a3f871-WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:37:09.908882 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:37:09.908901 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fcd584d6--c8ff--5eaf--81cc--26105cfb5447-osd--block--fcd584d6--c8ff--5eaf--81cc--26105cfb5447', 'dm-uuid-LVM-DDg0C3XoaiYrOzMcB0kfPfqzHg8E5JhRWG4AoOycNeM5Q2WICfjMBHF0YX2mqeJt'], 'uuids': ['0cb92a90-95ac-4932-ba98-85def0a3f871'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '43e26771', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['WG4AoO-ycNe-M5Q2-WICf-jMBH-F0YX-2mqeJt']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:37:13.403773 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5lLdRw-7tLp-t2wE-raTC-2xO3-NEEr-mCIRos', 'scsi-0QEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0', 'scsi-SQEMU_QEMU_HARDDISK_c3f554c9-cd3a-426a-b9ad-0bd91481d9b0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c3f554c9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--4d79f264--f564--5244--b3d4--1e30cd615742-osd--block--4d79f264--f564--5244--b3d4--1e30cd615742']}}, 'ansible_loop_var': 'item'})  2026-04-06 05:37:13.403882 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:37:13.403941 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd99642af', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_d99642af-b055-4abf-9556-6a3108e513b8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:37:13.403977 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:37:13.403991 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:37:13.404003 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-H7IVDV-pTPX-M3dD-z207-hlZr-QACS-13BMNP', 'dm-uuid-CRYPT-LUKS2-22ded8c89142404ca572856e0a8f4fba-H7IVDV-pTPX-M3dD-z207-hlZr-QACS-13BMNP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-06 05:37:13.404024 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:13.404038 | orchestrator | 2026-04-06 05:37:13.404050 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-06 05:37:13.404063 | orchestrator | Monday 06 April 2026 05:37:10 +0000 (0:00:00.458) 0:29:39.852 ********** 2026-04-06 05:37:13.404074 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:13.404086 | orchestrator | 2026-04-06 05:37:13.404097 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-06 05:37:13.404109 | orchestrator | Monday 06 April 2026 05:37:10 +0000 (0:00:00.473) 0:29:40.326 ********** 2026-04-06 05:37:13.404120 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:13.404131 | orchestrator | 2026-04-06 05:37:13.404142 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 05:37:13.404153 | orchestrator | Monday 06 April 2026 05:37:11 +0000 (0:00:00.508) 0:29:40.834 ********** 2026-04-06 05:37:13.404164 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:13.404174 | orchestrator | 2026-04-06 05:37:13.404185 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 05:37:13.404196 | orchestrator | Monday 06 April 2026 05:37:11 +0000 (0:00:00.519) 0:29:41.354 ********** 2026-04-06 05:37:13.404207 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:13.404218 | orchestrator | 2026-04-06 05:37:13.404229 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-06 05:37:13.404240 | orchestrator | Monday 06 April 2026 05:37:11 +0000 (0:00:00.142) 0:29:41.496 ********** 2026-04-06 05:37:13.404250 | orchestrator | skipping: [testbed-node-5] 2026-04-06 
05:37:13.404261 | orchestrator | 2026-04-06 05:37:13.404272 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-06 05:37:13.404286 | orchestrator | Monday 06 April 2026 05:37:12 +0000 (0:00:00.292) 0:29:41.789 ********** 2026-04-06 05:37:13.404299 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:13.404311 | orchestrator | 2026-04-06 05:37:13.404324 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-06 05:37:13.404337 | orchestrator | Monday 06 April 2026 05:37:12 +0000 (0:00:00.176) 0:29:41.965 ********** 2026-04-06 05:37:13.404349 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-04-06 05:37:13.404362 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-06 05:37:13.404379 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-06 05:37:13.404423 | orchestrator | 2026-04-06 05:37:13.404436 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-06 05:37:13.404449 | orchestrator | Monday 06 April 2026 05:37:12 +0000 (0:00:00.732) 0:29:42.698 ********** 2026-04-06 05:37:13.404463 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-06 05:37:13.404476 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-06 05:37:13.404489 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-06 05:37:13.404500 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:13.404511 | orchestrator | 2026-04-06 05:37:13.404522 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-06 05:37:13.404533 | orchestrator | Monday 06 April 2026 05:37:13 +0000 (0:00:00.173) 0:29:42.871 ********** 2026-04-06 05:37:13.404544 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-04-06 05:37:13.404555 | 
orchestrator | 2026-04-06 05:37:13.404582 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-06 05:37:28.519173 | orchestrator | Monday 06 April 2026 05:37:13 +0000 (0:00:00.240) 0:29:43.111 ********** 2026-04-06 05:37:28.519286 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:28.519303 | orchestrator | 2026-04-06 05:37:28.519320 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-06 05:37:28.519339 | orchestrator | Monday 06 April 2026 05:37:13 +0000 (0:00:00.142) 0:29:43.254 ********** 2026-04-06 05:37:28.519358 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:28.519437 | orchestrator | 2026-04-06 05:37:28.519455 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-06 05:37:28.519473 | orchestrator | Monday 06 April 2026 05:37:13 +0000 (0:00:00.151) 0:29:43.406 ********** 2026-04-06 05:37:28.519491 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:28.519509 | orchestrator | 2026-04-06 05:37:28.519528 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-06 05:37:28.519545 | orchestrator | Monday 06 April 2026 05:37:13 +0000 (0:00:00.164) 0:29:43.570 ********** 2026-04-06 05:37:28.519563 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:28.519581 | orchestrator | 2026-04-06 05:37:28.519599 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-06 05:37:28.519617 | orchestrator | Monday 06 April 2026 05:37:14 +0000 (0:00:00.230) 0:29:43.801 ********** 2026-04-06 05:37:28.519635 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-06 05:37:28.519654 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-06 05:37:28.519671 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-04-06 05:37:28.519689 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:28.519707 | orchestrator | 2026-04-06 05:37:28.519726 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-06 05:37:28.519748 | orchestrator | Monday 06 April 2026 05:37:15 +0000 (0:00:01.096) 0:29:44.897 ********** 2026-04-06 05:37:28.519769 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-06 05:37:28.519789 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-06 05:37:28.519810 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-06 05:37:28.519830 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:28.519850 | orchestrator | 2026-04-06 05:37:28.519872 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-06 05:37:28.519892 | orchestrator | Monday 06 April 2026 05:37:15 +0000 (0:00:00.402) 0:29:45.300 ********** 2026-04-06 05:37:28.519913 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-06 05:37:28.519934 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-06 05:37:28.519955 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-06 05:37:28.519973 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:28.519993 | orchestrator | 2026-04-06 05:37:28.520015 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-06 05:37:28.520038 | orchestrator | Monday 06 April 2026 05:37:15 +0000 (0:00:00.414) 0:29:45.715 ********** 2026-04-06 05:37:28.520060 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:28.520081 | orchestrator | 2026-04-06 05:37:28.520102 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-06 05:37:28.520122 | orchestrator | Monday 06 April 2026 05:37:16 +0000 
(0:00:00.168) 0:29:45.883 ********** 2026-04-06 05:37:28.520142 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-06 05:37:28.520162 | orchestrator | 2026-04-06 05:37:28.520182 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-06 05:37:28.520203 | orchestrator | Monday 06 April 2026 05:37:16 +0000 (0:00:00.342) 0:29:46.226 ********** 2026-04-06 05:37:28.520223 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:37:28.520244 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:37:28.520303 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:37:28.520323 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-06 05:37:28.520344 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-06 05:37:28.520407 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-04-06 05:37:28.520430 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-06 05:37:28.520449 | orchestrator | 2026-04-06 05:37:28.520468 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-06 05:37:28.520489 | orchestrator | Monday 06 April 2026 05:37:17 +0000 (0:00:00.829) 0:29:47.055 ********** 2026-04-06 05:37:28.520528 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-06 05:37:28.520548 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-06 05:37:28.520568 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-06 05:37:28.520588 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-04-06 05:37:28.520609 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-06 05:37:28.520629 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-04-06 05:37:28.520649 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-06 05:37:28.520667 | orchestrator | 2026-04-06 05:37:28.520685 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-04-06 05:37:28.520701 | orchestrator | Monday 06 April 2026 05:37:19 +0000 (0:00:01.729) 0:29:48.785 ********** 2026-04-06 05:37:28.520720 | orchestrator | changed: [testbed-node-5] 2026-04-06 05:37:28.520740 | orchestrator | 2026-04-06 05:37:28.520784 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-04-06 05:37:28.520801 | orchestrator | Monday 06 April 2026 05:37:20 +0000 (0:00:01.254) 0:29:50.039 ********** 2026-04-06 05:37:28.520818 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-06 05:37:28.520834 | orchestrator | 2026-04-06 05:37:28.520850 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-04-06 05:37:28.520867 | orchestrator | Monday 06 April 2026 05:37:22 +0000 (0:00:01.867) 0:29:51.907 ********** 2026-04-06 05:37:28.520883 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-06 05:37:28.520899 | orchestrator | 2026-04-06 05:37:28.520914 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-06 05:37:28.520930 | orchestrator | Monday 06 April 2026 05:37:23 +0000 (0:00:01.215) 0:29:53.122 ********** 2026-04-06 05:37:28.520948 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-04-06 05:37:28.520964 | orchestrator | 2026-04-06 05:37:28.520980 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-06 05:37:28.520996 | orchestrator | Monday 06 April 2026 05:37:23 +0000 (0:00:00.191) 0:29:53.314 ********** 2026-04-06 05:37:28.521013 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-04-06 05:37:28.521030 | orchestrator | 2026-04-06 05:37:28.521046 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-06 05:37:28.521062 | orchestrator | Monday 06 April 2026 05:37:24 +0000 (0:00:00.533) 0:29:53.847 ********** 2026-04-06 05:37:28.521080 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:28.521096 | orchestrator | 2026-04-06 05:37:28.521113 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-06 05:37:28.521146 | orchestrator | Monday 06 April 2026 05:37:24 +0000 (0:00:00.134) 0:29:53.982 ********** 2026-04-06 05:37:28.521163 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:28.521180 | orchestrator | 2026-04-06 05:37:28.521196 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-04-06 05:37:28.521212 | orchestrator | Monday 06 April 2026 05:37:24 +0000 (0:00:00.515) 0:29:54.498 ********** 2026-04-06 05:37:28.521228 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:28.521245 | orchestrator | 2026-04-06 05:37:28.521261 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-06 05:37:28.521277 | orchestrator | Monday 06 April 2026 05:37:25 +0000 (0:00:00.530) 0:29:55.029 ********** 2026-04-06 05:37:28.521292 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:28.521308 | orchestrator | 2026-04-06 05:37:28.521324 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-06 05:37:28.521340 | orchestrator | Monday 06 April 2026 05:37:25 +0000 (0:00:00.532) 0:29:55.561 ********** 2026-04-06 05:37:28.521357 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:28.521411 | orchestrator | 2026-04-06 05:37:28.521428 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-06 05:37:28.521445 | orchestrator | Monday 06 April 2026 05:37:25 +0000 (0:00:00.130) 0:29:55.691 ********** 2026-04-06 05:37:28.521461 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:28.521478 | orchestrator | 2026-04-06 05:37:28.521493 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-06 05:37:28.521509 | orchestrator | Monday 06 April 2026 05:37:26 +0000 (0:00:00.130) 0:29:55.821 ********** 2026-04-06 05:37:28.521525 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:28.521541 | orchestrator | 2026-04-06 05:37:28.521557 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-06 05:37:28.521572 | orchestrator | Monday 06 April 2026 05:37:26 +0000 (0:00:00.138) 0:29:55.960 ********** 2026-04-06 05:37:28.521588 | 
orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:28.521604 | orchestrator | 2026-04-06 05:37:28.521619 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-06 05:37:28.521636 | orchestrator | Monday 06 April 2026 05:37:26 +0000 (0:00:00.526) 0:29:56.486 ********** 2026-04-06 05:37:28.521652 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:28.521668 | orchestrator | 2026-04-06 05:37:28.521684 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-06 05:37:28.521701 | orchestrator | Monday 06 April 2026 05:37:27 +0000 (0:00:00.514) 0:29:57.001 ********** 2026-04-06 05:37:28.521717 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:28.521733 | orchestrator | 2026-04-06 05:37:28.521749 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-06 05:37:28.521764 | orchestrator | Monday 06 April 2026 05:37:27 +0000 (0:00:00.127) 0:29:57.129 ********** 2026-04-06 05:37:28.521780 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:28.521796 | orchestrator | 2026-04-06 05:37:28.521824 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-06 05:37:28.521840 | orchestrator | Monday 06 April 2026 05:37:27 +0000 (0:00:00.144) 0:29:57.273 ********** 2026-04-06 05:37:28.521856 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:28.521873 | orchestrator | 2026-04-06 05:37:28.521889 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-06 05:37:28.521907 | orchestrator | Monday 06 April 2026 05:37:28 +0000 (0:00:00.509) 0:29:57.782 ********** 2026-04-06 05:37:28.521923 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:28.521938 | orchestrator | 2026-04-06 05:37:28.521953 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-06 05:37:28.521970 
| orchestrator | Monday 06 April 2026 05:37:28 +0000 (0:00:00.146) 0:29:57.929 ********** 2026-04-06 05:37:28.521988 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:28.522004 | orchestrator | 2026-04-06 05:37:28.522110 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-06 05:37:28.522135 | orchestrator | Monday 06 April 2026 05:37:28 +0000 (0:00:00.151) 0:29:58.081 ********** 2026-04-06 05:37:28.522166 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:28.522183 | orchestrator | 2026-04-06 05:37:28.522215 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-06 05:37:40.067914 | orchestrator | Monday 06 April 2026 05:37:28 +0000 (0:00:00.148) 0:29:58.229 ********** 2026-04-06 05:37:40.068006 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068018 | orchestrator | 2026-04-06 05:37:40.068025 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-06 05:37:40.068032 | orchestrator | Monday 06 April 2026 05:37:28 +0000 (0:00:00.140) 0:29:58.370 ********** 2026-04-06 05:37:40.068038 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068044 | orchestrator | 2026-04-06 05:37:40.068051 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-06 05:37:40.068057 | orchestrator | Monday 06 April 2026 05:37:28 +0000 (0:00:00.135) 0:29:58.506 ********** 2026-04-06 05:37:40.068063 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:40.068071 | orchestrator | 2026-04-06 05:37:40.068078 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-06 05:37:40.068084 | orchestrator | Monday 06 April 2026 05:37:28 +0000 (0:00:00.190) 0:29:58.696 ********** 2026-04-06 05:37:40.068091 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:40.068097 | orchestrator | 2026-04-06 05:37:40.068103 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-06 05:37:40.068110 | orchestrator | Monday 06 April 2026 05:37:29 +0000 (0:00:00.219) 0:29:58.916 ********** 2026-04-06 05:37:40.068116 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068122 | orchestrator | 2026-04-06 05:37:40.068128 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-06 05:37:40.068135 | orchestrator | Monday 06 April 2026 05:37:29 +0000 (0:00:00.134) 0:29:59.051 ********** 2026-04-06 05:37:40.068140 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068146 | orchestrator | 2026-04-06 05:37:40.068152 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-06 05:37:40.068158 | orchestrator | Monday 06 April 2026 05:37:29 +0000 (0:00:00.134) 0:29:59.186 ********** 2026-04-06 05:37:40.068165 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068171 | orchestrator | 2026-04-06 05:37:40.068177 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-06 05:37:40.068184 | orchestrator | Monday 06 April 2026 05:37:29 +0000 (0:00:00.127) 0:29:59.314 ********** 2026-04-06 05:37:40.068191 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068197 | orchestrator | 2026-04-06 05:37:40.068203 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-06 05:37:40.068210 | orchestrator | Monday 06 April 2026 05:37:29 +0000 (0:00:00.129) 0:29:59.443 ********** 2026-04-06 05:37:40.068216 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068222 | orchestrator | 2026-04-06 05:37:40.068229 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-06 05:37:40.068235 | orchestrator | Monday 06 April 2026 05:37:29 +0000 (0:00:00.126) 0:29:59.570 ********** 
2026-04-06 05:37:40.068241 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068248 | orchestrator | 2026-04-06 05:37:40.068254 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-06 05:37:40.068260 | orchestrator | Monday 06 April 2026 05:37:30 +0000 (0:00:00.474) 0:30:00.044 ********** 2026-04-06 05:37:40.068267 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068273 | orchestrator | 2026-04-06 05:37:40.068279 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-06 05:37:40.068286 | orchestrator | Monday 06 April 2026 05:37:30 +0000 (0:00:00.130) 0:30:00.175 ********** 2026-04-06 05:37:40.068292 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068299 | orchestrator | 2026-04-06 05:37:40.068305 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-06 05:37:40.068334 | orchestrator | Monday 06 April 2026 05:37:30 +0000 (0:00:00.128) 0:30:00.304 ********** 2026-04-06 05:37:40.068342 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068391 | orchestrator | 2026-04-06 05:37:40.068398 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-06 05:37:40.068405 | orchestrator | Monday 06 April 2026 05:37:30 +0000 (0:00:00.136) 0:30:00.440 ********** 2026-04-06 05:37:40.068411 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068417 | orchestrator | 2026-04-06 05:37:40.068424 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-06 05:37:40.068430 | orchestrator | Monday 06 April 2026 05:37:30 +0000 (0:00:00.125) 0:30:00.566 ********** 2026-04-06 05:37:40.068436 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068442 | orchestrator | 2026-04-06 05:37:40.068448 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-04-06 05:37:40.068454 | orchestrator | Monday 06 April 2026 05:37:30 +0000 (0:00:00.132) 0:30:00.698 ********** 2026-04-06 05:37:40.068460 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068466 | orchestrator | 2026-04-06 05:37:40.068473 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-06 05:37:40.068493 | orchestrator | Monday 06 April 2026 05:37:31 +0000 (0:00:00.196) 0:30:00.895 ********** 2026-04-06 05:37:40.068500 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:40.068507 | orchestrator | 2026-04-06 05:37:40.068513 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-06 05:37:40.068520 | orchestrator | Monday 06 April 2026 05:37:32 +0000 (0:00:00.938) 0:30:01.834 ********** 2026-04-06 05:37:40.068526 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:40.068533 | orchestrator | 2026-04-06 05:37:40.068540 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-06 05:37:40.068546 | orchestrator | Monday 06 April 2026 05:37:33 +0000 (0:00:01.230) 0:30:03.064 ********** 2026-04-06 05:37:40.068552 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-04-06 05:37:40.068560 | orchestrator | 2026-04-06 05:37:40.068566 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-06 05:37:40.068574 | orchestrator | Monday 06 April 2026 05:37:33 +0000 (0:00:00.220) 0:30:03.284 ********** 2026-04-06 05:37:40.068582 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068591 | orchestrator | 2026-04-06 05:37:40.068600 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-06 05:37:40.068622 | orchestrator | Monday 06 April 2026 05:37:33 +0000 (0:00:00.136) 0:30:03.421 ********** 
2026-04-06 05:37:40.068629 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068635 | orchestrator | 2026-04-06 05:37:40.068641 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-06 05:37:40.068648 | orchestrator | Monday 06 April 2026 05:37:34 +0000 (0:00:00.472) 0:30:03.893 ********** 2026-04-06 05:37:40.068654 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-06 05:37:40.068660 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-06 05:37:40.068667 | orchestrator | 2026-04-06 05:37:40.068673 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-06 05:37:40.068679 | orchestrator | Monday 06 April 2026 05:37:35 +0000 (0:00:00.825) 0:30:04.719 ********** 2026-04-06 05:37:40.068685 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:40.068691 | orchestrator | 2026-04-06 05:37:40.068698 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-06 05:37:40.068704 | orchestrator | Monday 06 April 2026 05:37:35 +0000 (0:00:00.438) 0:30:05.157 ********** 2026-04-06 05:37:40.068710 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068717 | orchestrator | 2026-04-06 05:37:40.068723 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-06 05:37:40.068729 | orchestrator | Monday 06 April 2026 05:37:35 +0000 (0:00:00.142) 0:30:05.300 ********** 2026-04-06 05:37:40.068741 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068748 | orchestrator | 2026-04-06 05:37:40.068754 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-06 05:37:40.068760 | orchestrator | Monday 06 April 2026 05:37:35 +0000 (0:00:00.153) 0:30:05.454 ********** 2026-04-06 05:37:40.068766 | orchestrator | 
skipping: [testbed-node-5] 2026-04-06 05:37:40.068772 | orchestrator | 2026-04-06 05:37:40.068779 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-06 05:37:40.068785 | orchestrator | Monday 06 April 2026 05:37:35 +0000 (0:00:00.142) 0:30:05.597 ********** 2026-04-06 05:37:40.068791 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5 2026-04-06 05:37:40.068797 | orchestrator | 2026-04-06 05:37:40.068803 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-06 05:37:40.068808 | orchestrator | Monday 06 April 2026 05:37:36 +0000 (0:00:00.204) 0:30:05.801 ********** 2026-04-06 05:37:40.068815 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:40.068821 | orchestrator | 2026-04-06 05:37:40.068827 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-06 05:37:40.068833 | orchestrator | Monday 06 April 2026 05:37:36 +0000 (0:00:00.739) 0:30:06.540 ********** 2026-04-06 05:37:40.068839 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-06 05:37:40.068845 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-06 05:37:40.068852 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-06 05:37:40.068858 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068864 | orchestrator | 2026-04-06 05:37:40.068871 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-06 05:37:40.068877 | orchestrator | Monday 06 April 2026 05:37:36 +0000 (0:00:00.152) 0:30:06.692 ********** 2026-04-06 05:37:40.068883 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068890 | orchestrator | 2026-04-06 05:37:40.068896 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-04-06 05:37:40.068902 | orchestrator | Monday 06 April 2026 05:37:37 +0000 (0:00:00.140) 0:30:06.833 ********** 2026-04-06 05:37:40.068908 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068915 | orchestrator | 2026-04-06 05:37:40.068921 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-06 05:37:40.068927 | orchestrator | Monday 06 April 2026 05:37:37 +0000 (0:00:00.173) 0:30:07.007 ********** 2026-04-06 05:37:40.068933 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068939 | orchestrator | 2026-04-06 05:37:40.068946 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-06 05:37:40.068952 | orchestrator | Monday 06 April 2026 05:37:37 +0000 (0:00:00.167) 0:30:07.175 ********** 2026-04-06 05:37:40.068958 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068964 | orchestrator | 2026-04-06 05:37:40.068970 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-06 05:37:40.068977 | orchestrator | Monday 06 April 2026 05:37:37 +0000 (0:00:00.453) 0:30:07.628 ********** 2026-04-06 05:37:40.068983 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.068989 | orchestrator | 2026-04-06 05:37:40.068995 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-06 05:37:40.069006 | orchestrator | Monday 06 April 2026 05:37:38 +0000 (0:00:00.167) 0:30:07.796 ********** 2026-04-06 05:37:40.069012 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:40.069018 | orchestrator | 2026-04-06 05:37:40.069024 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-06 05:37:40.069031 | orchestrator | Monday 06 April 2026 05:37:39 +0000 (0:00:01.442) 0:30:09.238 ********** 2026-04-06 05:37:40.069037 | orchestrator | ok: 
[testbed-node-5] 2026-04-06 05:37:40.069043 | orchestrator | 2026-04-06 05:37:40.069050 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-06 05:37:40.069060 | orchestrator | Monday 06 April 2026 05:37:39 +0000 (0:00:00.140) 0:30:09.379 ********** 2026-04-06 05:37:40.069067 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 2026-04-06 05:37:40.069073 | orchestrator | 2026-04-06 05:37:40.069080 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-06 05:37:40.069086 | orchestrator | Monday 06 April 2026 05:37:39 +0000 (0:00:00.216) 0:30:09.595 ********** 2026-04-06 05:37:40.069092 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:40.069099 | orchestrator | 2026-04-06 05:37:40.069105 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-06 05:37:40.069115 | orchestrator | Monday 06 April 2026 05:37:40 +0000 (0:00:00.176) 0:30:09.771 ********** 2026-04-06 05:37:58.782810 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:58.782909 | orchestrator | 2026-04-06 05:37:58.782920 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-06 05:37:58.782929 | orchestrator | Monday 06 April 2026 05:37:40 +0000 (0:00:00.154) 0:30:09.926 ********** 2026-04-06 05:37:58.782936 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:58.782944 | orchestrator | 2026-04-06 05:37:58.782951 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-06 05:37:58.782959 | orchestrator | Monday 06 April 2026 05:37:40 +0000 (0:00:00.142) 0:30:10.068 ********** 2026-04-06 05:37:58.782966 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:58.782974 | orchestrator | 2026-04-06 05:37:58.782981 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-04-06 05:37:58.782989 | orchestrator | Monday 06 April 2026 05:37:40 +0000 (0:00:00.154) 0:30:10.222 ********** 2026-04-06 05:37:58.782996 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:58.783004 | orchestrator | 2026-04-06 05:37:58.783011 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-06 05:37:58.783018 | orchestrator | Monday 06 April 2026 05:37:40 +0000 (0:00:00.146) 0:30:10.369 ********** 2026-04-06 05:37:58.783026 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:58.783033 | orchestrator | 2026-04-06 05:37:58.783040 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-06 05:37:58.783048 | orchestrator | Monday 06 April 2026 05:37:40 +0000 (0:00:00.191) 0:30:10.561 ********** 2026-04-06 05:37:58.783055 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:58.783063 | orchestrator | 2026-04-06 05:37:58.783070 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-06 05:37:58.783078 | orchestrator | Monday 06 April 2026 05:37:40 +0000 (0:00:00.152) 0:30:10.713 ********** 2026-04-06 05:37:58.783085 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:58.783092 | orchestrator | 2026-04-06 05:37:58.783100 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-06 05:37:58.783107 | orchestrator | Monday 06 April 2026 05:37:41 +0000 (0:00:00.477) 0:30:11.191 ********** 2026-04-06 05:37:58.783114 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:37:58.783123 | orchestrator | 2026-04-06 05:37:58.783130 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-06 05:37:58.783138 | orchestrator | Monday 06 April 2026 05:37:41 +0000 (0:00:00.236) 0:30:11.428 ********** 2026-04-06 05:37:58.783145 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5 2026-04-06 05:37:58.783154 | orchestrator | 2026-04-06 05:37:58.783161 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-06 05:37:58.783168 | orchestrator | Monday 06 April 2026 05:37:41 +0000 (0:00:00.192) 0:30:11.620 ********** 2026-04-06 05:37:58.783176 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph) 2026-04-06 05:37:58.783183 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-04-06 05:37:58.783191 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-04-06 05:37:58.783198 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-04-06 05:37:58.783205 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-04-06 05:37:58.783234 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-04-06 05:37:58.783242 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-04-06 05:37:58.783250 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-04-06 05:37:58.783257 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-06 05:37:58.783264 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-06 05:37:58.783271 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-06 05:37:58.783279 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-06 05:37:58.783286 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-06 05:37:58.783293 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-06 05:37:58.783301 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-04-06 05:37:58.783308 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-04-06 05:37:58.783315 | orchestrator | 2026-04-06 05:37:58.783380 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-06 05:37:58.783394 | orchestrator | Monday 06 April 2026 05:37:47 +0000 (0:00:05.570) 0:30:17.191 ********** 2026-04-06 05:37:58.783405 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-04-06 05:37:58.783414 | orchestrator | 2026-04-06 05:37:58.783435 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-06 05:37:58.783442 | orchestrator | Monday 06 April 2026 05:37:47 +0000 (0:00:00.221) 0:30:17.412 ********** 2026-04-06 05:37:58.783450 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-06 05:37:58.783459 | orchestrator | 2026-04-06 05:37:58.783466 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-06 05:37:58.783473 | orchestrator | Monday 06 April 2026 05:37:48 +0000 (0:00:00.515) 0:30:17.928 ********** 2026-04-06 05:37:58.783481 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-06 05:37:58.783490 | orchestrator | 2026-04-06 05:37:58.783499 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-06 05:37:58.783507 | orchestrator | Monday 06 April 2026 05:37:49 +0000 (0:00:01.001) 0:30:18.930 ********** 2026-04-06 05:37:58.783516 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:58.783525 | orchestrator | 2026-04-06 05:37:58.783533 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-06 05:37:58.783557 | orchestrator | Monday 06 April 2026 05:37:49 +0000 (0:00:00.137) 0:30:19.068 ********** 2026-04-06 05:37:58.783566 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:58.783575 | 
orchestrator | 2026-04-06 05:37:58.783583 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-06 05:37:58.783592 | orchestrator | Monday 06 April 2026 05:37:49 +0000 (0:00:00.135) 0:30:19.203 ********** 2026-04-06 05:37:58.783600 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:58.783609 | orchestrator | 2026-04-06 05:37:58.783617 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-06 05:37:58.783626 | orchestrator | Monday 06 April 2026 05:37:49 +0000 (0:00:00.138) 0:30:19.341 ********** 2026-04-06 05:37:58.783634 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:58.783643 | orchestrator | 2026-04-06 05:37:58.783652 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-06 05:37:58.783660 | orchestrator | Monday 06 April 2026 05:37:50 +0000 (0:00:00.427) 0:30:19.769 ********** 2026-04-06 05:37:58.783669 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:58.783677 | orchestrator | 2026-04-06 05:37:58.783686 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-06 05:37:58.783694 | orchestrator | Monday 06 April 2026 05:37:50 +0000 (0:00:00.137) 0:30:19.906 ********** 2026-04-06 05:37:58.783712 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:58.783721 | orchestrator | 2026-04-06 05:37:58.783729 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-06 05:37:58.783738 | orchestrator | Monday 06 April 2026 05:37:50 +0000 (0:00:00.163) 0:30:20.070 ********** 2026-04-06 05:37:58.783746 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:58.783755 | orchestrator | 2026-04-06 05:37:58.783764 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-04-06 05:37:58.783772 | orchestrator | Monday 06 April 2026 05:37:50 +0000 (0:00:00.159) 0:30:20.230 ********** 2026-04-06 05:37:58.783781 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:58.783789 | orchestrator | 2026-04-06 05:37:58.783798 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-06 05:37:58.783806 | orchestrator | Monday 06 April 2026 05:37:50 +0000 (0:00:00.134) 0:30:20.365 ********** 2026-04-06 05:37:58.783815 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:58.783823 | orchestrator | 2026-04-06 05:37:58.783832 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-06 05:37:58.783841 | orchestrator | Monday 06 April 2026 05:37:50 +0000 (0:00:00.148) 0:30:20.513 ********** 2026-04-06 05:37:58.783849 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:58.783858 | orchestrator | 2026-04-06 05:37:58.783866 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-06 05:37:58.783875 | orchestrator | Monday 06 April 2026 05:37:50 +0000 (0:00:00.144) 0:30:20.658 ********** 2026-04-06 05:37:58.783883 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:58.783892 | orchestrator | 2026-04-06 05:37:58.783900 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-06 05:37:58.783909 | orchestrator | Monday 06 April 2026 05:37:51 +0000 (0:00:00.145) 0:30:20.803 ********** 2026-04-06 05:37:58.783917 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-04-06 05:37:58.783926 | orchestrator | 2026-04-06 05:37:58.783934 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-06 05:37:58.783943 | orchestrator | Monday 06 April 2026 05:37:54 +0000 (0:00:03.251) 0:30:24.055 ********** 2026-04-06 05:37:58.783951 | orchestrator | 
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-06 05:37:58.783960 | orchestrator | 2026-04-06 05:37:58.783968 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-06 05:37:58.783977 | orchestrator | Monday 06 April 2026 05:37:54 +0000 (0:00:00.209) 0:30:24.264 ********** 2026-04-06 05:37:58.783988 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-04-06 05:37:58.784004 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-04-06 05:37:58.784014 | orchestrator | 2026-04-06 05:37:58.784023 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-06 05:37:58.784032 | orchestrator | Monday 06 April 2026 05:37:58 +0000 (0:00:03.798) 0:30:28.063 ********** 2026-04-06 05:37:58.784040 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:58.784049 | orchestrator | 2026-04-06 05:37:58.784057 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-06 05:37:58.784066 | orchestrator | Monday 06 April 2026 05:37:58 +0000 (0:00:00.138) 0:30:28.201 ********** 2026-04-06 05:37:58.784080 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:58.784089 | orchestrator | 2026-04-06 05:37:58.784098 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-06 05:37:58.784106 | orchestrator | Monday 06 April 2026 05:37:58 +0000 (0:00:00.139) 0:30:28.341 ********** 2026-04-06 05:37:58.784119 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:37:58.784128 | orchestrator | 2026-04-06 05:37:58.784137 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-06 05:37:58.784150 | orchestrator | Monday 06 April 2026 05:37:58 +0000 (0:00:00.148) 0:30:28.490 ********** 2026-04-06 05:38:47.875313 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:38:47.875419 | orchestrator | 2026-04-06 05:38:47.875433 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-06 05:38:47.875443 | orchestrator | Monday 06 April 2026 05:37:59 +0000 (0:00:00.477) 0:30:28.967 ********** 2026-04-06 05:38:47.875453 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:38:47.875462 | orchestrator | 2026-04-06 05:38:47.875471 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-06 05:38:47.875480 | orchestrator | Monday 06 April 2026 05:37:59 +0000 (0:00:00.173) 0:30:29.141 ********** 2026-04-06 05:38:47.875489 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:38:47.875498 | orchestrator | 2026-04-06 05:38:47.875507 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-06 05:38:47.875516 | orchestrator | Monday 06 April 2026 05:37:59 +0000 (0:00:00.270) 0:30:29.412 ********** 2026-04-06 05:38:47.875525 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-06 05:38:47.875534 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-06 05:38:47.875543 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-06 05:38:47.875551 | orchestrator | skipping: 
[testbed-node-5] 2026-04-06 05:38:47.875560 | orchestrator | 2026-04-06 05:38:47.875569 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-06 05:38:47.875578 | orchestrator | Monday 06 April 2026 05:38:00 +0000 (0:00:00.432) 0:30:29.845 ********** 2026-04-06 05:38:47.875587 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-06 05:38:47.875596 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-06 05:38:47.875604 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-06 05:38:47.875613 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:38:47.875622 | orchestrator | 2026-04-06 05:38:47.875631 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-06 05:38:47.875640 | orchestrator | Monday 06 April 2026 05:38:00 +0000 (0:00:00.462) 0:30:30.308 ********** 2026-04-06 05:38:47.875649 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-06 05:38:47.875657 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-06 05:38:47.875666 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-06 05:38:47.875676 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:38:47.875685 | orchestrator | 2026-04-06 05:38:47.875694 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-06 05:38:47.875703 | orchestrator | Monday 06 April 2026 05:38:01 +0000 (0:00:00.429) 0:30:30.737 ********** 2026-04-06 05:38:47.875712 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:38:47.875720 | orchestrator | 2026-04-06 05:38:47.875729 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-06 05:38:47.875738 | orchestrator | Monday 06 April 2026 05:38:01 +0000 (0:00:00.188) 0:30:30.926 ********** 2026-04-06 05:38:47.875747 | orchestrator | ok: 
[testbed-node-5] => (item=0) 2026-04-06 05:38:47.875756 | orchestrator | 2026-04-06 05:38:47.875765 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-06 05:38:47.875774 | orchestrator | Monday 06 April 2026 05:38:01 +0000 (0:00:00.452) 0:30:31.379 ********** 2026-04-06 05:38:47.875805 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:38:47.875814 | orchestrator | 2026-04-06 05:38:47.875823 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-06 05:38:47.875832 | orchestrator | Monday 06 April 2026 05:38:02 +0000 (0:00:00.813) 0:30:32.192 ********** 2026-04-06 05:38:47.875841 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5 2026-04-06 05:38:47.875849 | orchestrator | 2026-04-06 05:38:47.875858 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-06 05:38:47.875866 | orchestrator | Monday 06 April 2026 05:38:02 +0000 (0:00:00.216) 0:30:32.409 ********** 2026-04-06 05:38:47.875875 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 05:38:47.875884 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-06 05:38:47.875892 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-06 05:38:47.875901 | orchestrator | 2026-04-06 05:38:47.875910 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-06 05:38:47.875918 | orchestrator | Monday 06 April 2026 05:38:05 +0000 (0:00:02.901) 0:30:35.310 ********** 2026-04-06 05:38:47.875927 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-06 05:38:47.875935 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-06 05:38:47.875944 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:38:47.875952 | orchestrator | 2026-04-06 05:38:47.875974 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-04-06 05:38:47.875984 | orchestrator | Monday 06 April 2026 05:38:06 +0000 (0:00:00.934) 0:30:36.245 ********** 2026-04-06 05:38:47.875992 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:38:47.876001 | orchestrator | 2026-04-06 05:38:47.876010 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-06 05:38:47.876018 | orchestrator | Monday 06 April 2026 05:38:06 +0000 (0:00:00.125) 0:30:36.371 ********** 2026-04-06 05:38:47.876027 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5 2026-04-06 05:38:47.876036 | orchestrator | 2026-04-06 05:38:47.876045 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-06 05:38:47.876053 | orchestrator | Monday 06 April 2026 05:38:06 +0000 (0:00:00.203) 0:30:36.575 ********** 2026-04-06 05:38:47.876063 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-06 05:38:47.876073 | orchestrator | 2026-04-06 05:38:47.876082 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-06 05:38:47.876091 | orchestrator | Monday 06 April 2026 05:38:07 +0000 (0:00:00.613) 0:30:37.189 ********** 2026-04-06 05:38:47.876114 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-06 05:38:47.876124 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-06 05:38:47.876133 | orchestrator | 2026-04-06 05:38:47.876142 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-06 05:38:47.876151 | orchestrator | Monday 06 April 2026 05:38:11 +0000 (0:00:04.152) 0:30:41.341 ********** 
2026-04-06 05:38:47.876160 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-06 05:38:47.876168 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-06 05:38:47.876177 | orchestrator |
2026-04-06 05:38:47.876186 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-04-06 05:38:47.876195 | orchestrator | Monday 06 April 2026 05:38:13 +0000 (0:00:02.026) 0:30:43.368 **********
2026-04-06 05:38:47.876204 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-04-06 05:38:47.876212 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:38:47.876221 | orchestrator |
2026-04-06 05:38:47.876230 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-04-06 05:38:47.876239 | orchestrator | Monday 06 April 2026 05:38:14 +0000 (0:00:00.970) 0:30:44.338 **********
2026-04-06 05:38:47.876254 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5
2026-04-06 05:38:47.876285 | orchestrator |
2026-04-06 05:38:47.876294 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-04-06 05:38:47.876303 | orchestrator | Monday 06 April 2026 05:38:14 +0000 (0:00:00.222) 0:30:44.561 **********
2026-04-06 05:38:47.876312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:38:47.876321 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:38:47.876330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:38:47.876339 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:38:47.876348 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:38:47.876356 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:38:47.876365 | orchestrator |
2026-04-06 05:38:47.876374 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-04-06 05:38:47.876383 | orchestrator | Monday 06 April 2026 05:38:15 +0000 (0:00:00.992) 0:30:45.553 **********
2026-04-06 05:38:47.876391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:38:47.876400 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:38:47.876409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:38:47.876418 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:38:47.876427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:38:47.876435 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:38:47.876444 | orchestrator |
2026-04-06 05:38:47.876453 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-04-06 05:38:47.876461 | orchestrator | Monday 06 April 2026 05:38:16 +0000 (0:00:00.946) 0:30:46.500 **********
2026-04-06 05:38:47.876470 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:38:47.876484 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:38:47.876493 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:38:47.876502 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:38:47.876513 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-06 05:38:47.876521 | orchestrator |
2026-04-06 05:38:47.876530 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-04-06 05:38:47.876539 | orchestrator | Monday 06 April 2026 05:38:47 +0000 (0:00:30.954) 0:31:17.454 **********
2026-04-06 05:38:47.876548 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:38:47.876556 | orchestrator |
2026-04-06 05:38:47.876565 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-04-06 05:38:47.876585 | orchestrator | Monday 06 April 2026 05:38:47 +0000 (0:00:00.129) 0:31:17.583 **********
2026-04-06 05:39:15.356802 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:39:15.356919 | orchestrator |
2026-04-06 05:39:15.356940 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-04-06 05:39:15.356956 | orchestrator | Monday 06 April 2026 05:38:47 +0000 (0:00:00.130) 0:31:17.713 **********
2026-04-06 05:39:15.356971 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5
2026-04-06 05:39:15.356987 | orchestrator |
2026-04-06 05:39:15.357002 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-04-06 05:39:15.357013 | orchestrator | Monday 06 April 2026 05:38:48 +0000 (0:00:00.201) 0:31:17.915 **********
2026-04-06 05:39:15.357023 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5
2026-04-06 05:39:15.357031 | orchestrator |
2026-04-06 05:39:15.357040 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-04-06 05:39:15.357049 | orchestrator | Monday 06 April 2026 05:38:48 +0000 (0:00:00.189) 0:31:18.105 **********
2026-04-06 05:39:15.357058 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:39:15.357068 | orchestrator |
2026-04-06 05:39:15.357077 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-04-06 05:39:15.357086 | orchestrator | Monday 06 April 2026 05:38:49 +0000 (0:00:01.016) 0:31:19.121 **********
2026-04-06 05:39:15.357094 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:39:15.357103 | orchestrator |
2026-04-06 05:39:15.357112 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-04-06 05:39:15.357121 | orchestrator | Monday 06 April 2026 05:38:50 +0000 (0:00:00.943) 0:31:20.064 **********
2026-04-06 05:39:15.357129 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:39:15.357138 | orchestrator |
2026-04-06 05:39:15.357147 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-04-06 05:39:15.357156 | orchestrator | Monday 06 April 2026 05:38:51 +0000 (0:00:01.177) 0:31:21.242 **********
2026-04-06 05:39:15.357165 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-06 05:39:15.357175 | orchestrator |
2026-04-06 05:39:15.357184 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ********************************************
2026-04-06 05:39:15.357193 | orchestrator | skipping: no hosts matched
2026-04-06 05:39:15.357201 | orchestrator |
2026-04-06 05:39:15.357210 | orchestrator | PLAY [Upgrade ceph nfs node] ***************************************************
2026-04-06 05:39:15.357219 | orchestrator | skipping: no hosts matched
2026-04-06 05:39:15.357229 | orchestrator |
2026-04-06 05:39:15.357309 | orchestrator | PLAY [Upgrade ceph client node] ************************************************
2026-04-06 05:39:15.357324 | orchestrator | skipping: no hosts matched
2026-04-06 05:39:15.357338 | orchestrator |
2026-04-06 05:39:15.357353 | orchestrator | PLAY [Upgrade ceph-crash daemons] **********************************************
2026-04-06 05:39:15.357371 | orchestrator |
2026-04-06 05:39:15.357391 | orchestrator | TASK [Stop the ceph-crash service] *********************************************
2026-04-06 05:39:15.357405 | orchestrator | Monday 06 April 2026 05:38:54 +0000 (0:00:03.458) 0:31:24.701 **********
2026-04-06 05:39:15.357419 | orchestrator | changed: [testbed-node-0]
2026-04-06 05:39:15.357432 | orchestrator | changed: [testbed-node-1]
2026-04-06 05:39:15.357444 | orchestrator | changed: [testbed-node-2]
2026-04-06 05:39:15.357457 | orchestrator | changed: [testbed-node-3]
2026-04-06 05:39:15.357471 | orchestrator | changed: [testbed-node-5]
2026-04-06 05:39:15.357483 | orchestrator | changed: [testbed-node-4]
2026-04-06 05:39:15.357496 | orchestrator |
2026-04-06 05:39:15.357509 | orchestrator | TASK [Mask and disable the ceph-crash service] *********************************
2026-04-06 05:39:15.357522 | orchestrator | Monday 06 April 2026 05:38:56 +0000 (0:00:01.797) 0:31:26.499 **********
2026-04-06 05:39:15.357534 | orchestrator | changed: [testbed-node-3]
2026-04-06 05:39:15.357547 | orchestrator | changed: [testbed-node-1]
2026-04-06 05:39:15.357588 | orchestrator | changed: [testbed-node-0]
2026-04-06 05:39:15.357602 | orchestrator | changed: [testbed-node-4]
2026-04-06 05:39:15.357614 | orchestrator | changed: [testbed-node-2]
2026-04-06 05:39:15.357627 | orchestrator | changed: [testbed-node-5]
2026-04-06 05:39:15.357639 | orchestrator |
2026-04-06 05:39:15.357652 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-06 05:39:15.357664 | orchestrator | Monday 06 April 2026 05:38:59 +0000 (0:00:02.502) 0:31:29.002 **********
2026-04-06 05:39:15.357676 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:39:15.357694 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:39:15.357713 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:39:15.357731 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:39:15.357749 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:39:15.357766 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:39:15.357783 | orchestrator |
2026-04-06 05:39:15.357800 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-06 05:39:15.357869 | orchestrator | Monday 06 April 2026 05:39:00 +0000 (0:00:00.969) 0:31:29.971 **********
2026-04-06 05:39:15.357889 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:39:15.357900 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:39:15.357911 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:39:15.357922 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:39:15.357932 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:39:15.357943 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:39:15.357954 | orchestrator |
2026-04-06 05:39:15.357966 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-06 05:39:15.357985 | orchestrator | Monday 06 April 2026 05:39:01 +0000 (0:00:01.343) 0:31:31.314 **********
2026-04-06 05:39:15.358005 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 05:39:15.358099 | orchestrator |
2026-04-06 05:39:15.358118 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-06 05:39:15.358137 | orchestrator | Monday 06 April 2026 05:39:02 +0000 (0:00:01.345) 0:31:32.660 **********
2026-04-06 05:39:15.358155 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 05:39:15.358173 | orchestrator |
2026-04-06 05:39:15.358218 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-06 05:39:15.358358 | orchestrator | Monday 06 April 2026 05:39:04 +0000 (0:00:01.326) 0:31:33.987 **********
2026-04-06 05:39:15.358383 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:39:15.358401 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:39:15.358418 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:39:15.358434 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:39:15.358449 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:39:15.358463 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:39:15.358477 | orchestrator |
2026-04-06 05:39:15.358492 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-06 05:39:15.358506 | orchestrator | Monday 06 April 2026 05:39:05 +0000 (0:00:00.749) 0:31:34.736 **********
2026-04-06 05:39:15.358522 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:39:15.358537 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:39:15.358551 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:39:15.358565 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:39:15.358581 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:39:15.358596 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:39:15.358611 | orchestrator |
2026-04-06 05:39:15.358626 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-06 05:39:15.358643 | orchestrator | Monday 06 April 2026 05:39:06 +0000 (0:00:01.412) 0:31:36.148 **********
2026-04-06 05:39:15.358659 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:39:15.358674 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:39:15.358708 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:39:15.358725 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:39:15.358742 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:39:15.358757 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:39:15.358773 | orchestrator |
2026-04-06 05:39:15.358790 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-06 05:39:15.358807 | orchestrator | Monday 06 April 2026 05:39:07 +0000 (0:00:01.020) 0:31:37.169 **********
2026-04-06 05:39:15.358823 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:39:15.358838 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:39:15.358855 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:39:15.358867 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:39:15.358877 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:39:15.358886 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:39:15.358897 | orchestrator |
2026-04-06 05:39:15.358914 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-06 05:39:15.358930 | orchestrator | Monday 06 April 2026 05:39:08 +0000 (0:00:01.353) 0:31:38.522 **********
2026-04-06 05:39:15.358945 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:39:15.358961 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:39:15.358976 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:39:15.358993 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:39:15.359010 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:39:15.359026 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:39:15.359043 | orchestrator |
2026-04-06 05:39:15.359061 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-06 05:39:15.359078 | orchestrator | Monday 06 April 2026 05:39:09 +0000 (0:00:00.763) 0:31:39.286 **********
2026-04-06 05:39:15.359095 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:39:15.359105 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:39:15.359114 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:39:15.359124 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:39:15.359134 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:39:15.359143 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:39:15.359153 | orchestrator |
2026-04-06 05:39:15.359165 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-06 05:39:15.359182 | orchestrator | Monday 06 April 2026 05:39:10 +0000 (0:00:00.976) 0:31:40.262 **********
2026-04-06 05:39:15.359197 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:39:15.359213 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:39:15.359229 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:39:15.359272 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:39:15.359290 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:39:15.359306 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:39:15.359323 | orchestrator |
2026-04-06 05:39:15.359338 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-06 05:39:15.359353 | orchestrator | Monday 06 April 2026 05:39:11 +0000 (0:00:00.693) 0:31:40.955 **********
2026-04-06 05:39:15.359369 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:39:15.359386 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:39:15.359402 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:39:15.359418 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:39:15.359435 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:39:15.359450 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:39:15.359468 | orchestrator |
2026-04-06 05:39:15.359484 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-06 05:39:15.359499 | orchestrator | Monday 06 April 2026 05:39:12 +0000 (0:00:01.345) 0:31:42.301 **********
2026-04-06 05:39:15.359524 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:39:15.359535 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:39:15.359545 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:39:15.359554 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:39:15.359564 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:39:15.359573 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:39:15.359583 | orchestrator |
2026-04-06 05:39:15.359603 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-06 05:39:15.359613 | orchestrator | Monday 06 April 2026 05:39:13 +0000 (0:00:01.094) 0:31:43.396 **********
2026-04-06 05:39:15.359622 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:39:15.359632 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:39:15.359642 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:39:15.359651 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:39:15.359660 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:39:15.359670 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:39:15.359680 | orchestrator |
2026-04-06 05:39:15.359689 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-06 05:39:15.359699 | orchestrator | Monday 06 April 2026 05:39:14 +0000 (0:00:00.655) 0:31:44.051 **********
2026-04-06 05:39:15.359708 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:39:15.359718 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:39:15.359727 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:39:15.359737 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:39:15.359746 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:39:15.359756 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:39:15.359765 | orchestrator |
2026-04-06 05:39:15.359788 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-06 05:39:47.780047 | orchestrator | Monday 06 April 2026 05:39:15 +0000 (0:00:01.019) 0:31:45.071 **********
2026-04-06 05:39:47.780159 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:39:47.780175 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:39:47.780187 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:39:47.780257 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:39:47.780271 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:39:47.780282 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:39:47.780293 | orchestrator |
2026-04-06 05:39:47.780305 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-06 05:39:47.780317 | orchestrator | Monday 06 April 2026 05:39:16 +0000 (0:00:00.678) 0:31:45.750 **********
2026-04-06 05:39:47.780328 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:39:47.780339 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:39:47.780350 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:39:47.780361 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:39:47.780372 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:39:47.780383 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:39:47.780393 | orchestrator |
2026-04-06 05:39:47.780405 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-06 05:39:47.780416 | orchestrator | Monday 06 April 2026 05:39:17 +0000 (0:00:00.972) 0:31:46.723 **********
2026-04-06 05:39:47.780426 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:39:47.780437 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:39:47.780448 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:39:47.780459 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:39:47.780470 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:39:47.780481 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:39:47.780492 | orchestrator |
2026-04-06 05:39:47.780502 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-06 05:39:47.780513 | orchestrator | Monday 06 April 2026 05:39:17 +0000 (0:00:00.693) 0:31:47.417 **********
2026-04-06 05:39:47.780524 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:39:47.780535 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:39:47.780546 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:39:47.780557 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:39:47.780568 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:39:47.780579 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:39:47.780591 | orchestrator |
2026-04-06 05:39:47.780606 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-06 05:39:47.780618 | orchestrator | Monday 06 April 2026 05:39:18 +0000 (0:00:00.962) 0:31:48.379 **********
2026-04-06 05:39:47.780631 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:39:47.780670 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:39:47.780683 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:39:47.780696 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:39:47.780708 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:39:47.780720 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:39:47.780733 | orchestrator |
2026-04-06 05:39:47.780746 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-06 05:39:47.780758 | orchestrator | Monday 06 April 2026 05:39:19 +0000 (0:00:00.643) 0:31:49.022 **********
2026-04-06 05:39:47.780771 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:39:47.780783 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:39:47.780796 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:39:47.780808 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:39:47.780821 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:39:47.780833 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:39:47.780846 | orchestrator |
2026-04-06 05:39:47.780859 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-06 05:39:47.780872 | orchestrator | Monday 06 April 2026 05:39:20 +0000 (0:00:00.976) 0:31:49.998 **********
2026-04-06 05:39:47.780884 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:39:47.780897 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:39:47.780910 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:39:47.780923 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:39:47.780936 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:39:47.780949 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:39:47.780959 | orchestrator |
2026-04-06 05:39:47.780971 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-06 05:39:47.780982 | orchestrator | Monday 06 April 2026 05:39:20 +0000 (0:00:00.714) 0:31:50.712 **********
2026-04-06 05:39:47.780992 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:39:47.781003 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:39:47.781014 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:39:47.781024 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:39:47.781035 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:39:47.781045 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:39:47.781056 | orchestrator |
2026-04-06 05:39:47.781067 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-04-06 05:39:47.781091 | orchestrator | Monday 06 April 2026 05:39:22 +0000 (0:00:01.426) 0:31:52.139 **********
2026-04-06 05:39:47.781103 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:39:47.781113 | orchestrator |
2026-04-06 05:39:47.781124 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-04-06 05:39:47.781135 | orchestrator | Monday 06 April 2026 05:39:24 +0000 (0:00:02.070) 0:31:54.209 **********
2026-04-06 05:39:47.781146 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:39:47.781156 | orchestrator |
2026-04-06 05:39:47.781167 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-04-06 05:39:47.781178 | orchestrator | Monday 06 April 2026 05:39:26 +0000 (0:00:02.065) 0:31:56.275 **********
2026-04-06 05:39:47.781188 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:39:47.781215 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:39:47.781226 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:39:47.781237 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:39:47.781248 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:39:47.781258 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:39:47.781269 | orchestrator |
2026-04-06 05:39:47.781280 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-04-06 05:39:47.781290 | orchestrator | Monday 06 April 2026 05:39:28 +0000 (0:00:01.751) 0:31:58.026 **********
2026-04-06 05:39:47.781301 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:39:47.781312 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:39:47.781323 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:39:47.781333 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:39:47.781344 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:39:47.781354 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:39:47.781373 | orchestrator |
2026-04-06 05:39:47.781384 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-04-06 05:39:47.781413 | orchestrator | Monday 06 April 2026 05:39:29 +0000 (0:00:01.022) 0:31:59.048 **********
2026-04-06 05:39:47.781425 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 05:39:47.781437 | orchestrator |
2026-04-06 05:39:47.781448 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-04-06 05:39:47.781459 | orchestrator | Monday 06 April 2026 05:39:31 +0000 (0:00:01.722) 0:32:00.771 **********
2026-04-06 05:39:47.781470 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:39:47.781480 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:39:47.781491 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:39:47.781501 | orchestrator | ok: [testbed-node-3]
2026-04-06 05:39:47.781512 | orchestrator | ok: [testbed-node-4]
2026-04-06 05:39:47.781523 | orchestrator | ok: [testbed-node-5]
2026-04-06 05:39:47.781533 | orchestrator |
2026-04-06 05:39:47.781544 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-04-06 05:39:47.781555 | orchestrator | Monday 06 April 2026 05:39:32 +0000 (0:00:01.552) 0:32:02.323 **********
2026-04-06 05:39:47.781566 | orchestrator | changed: [testbed-node-3]
2026-04-06 05:39:47.781577 | orchestrator | changed: [testbed-node-0]
2026-04-06 05:39:47.781587 | orchestrator | changed: [testbed-node-4]
2026-04-06 05:39:47.781598 | orchestrator | changed: [testbed-node-5]
2026-04-06 05:39:47.781609 | orchestrator | changed: [testbed-node-2]
2026-04-06 05:39:47.781619 | orchestrator | changed: [testbed-node-1]
2026-04-06 05:39:47.781630 | orchestrator |
2026-04-06 05:39:47.781641 | orchestrator | PLAY [Complete upgrade] ********************************************************
2026-04-06 05:39:47.781651 | orchestrator |
2026-04-06 05:39:47.781662 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-06 05:39:47.781673 | orchestrator | Monday 06 April 2026 05:39:37 +0000 (0:00:04.738) 0:32:07.062 **********
2026-04-06 05:39:47.781684 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:39:47.781694 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:39:47.781705 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:39:47.781716 | orchestrator |
2026-04-06 05:39:47.781727 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-06 05:39:47.781737 | orchestrator | Monday 06 April 2026 05:39:38 +0000 (0:00:00.687) 0:32:07.749 **********
2026-04-06 05:39:47.781748 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:39:47.781759 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:39:47.781770 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:39:47.781780 | orchestrator |
2026-04-06 05:39:47.781791 | orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] ***
2026-04-06 05:39:47.781803 | orchestrator | Monday 06 April 2026 05:39:38 +0000 (0:00:00.592) 0:32:08.342 **********
2026-04-06 05:39:47.781813 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:39:47.781824 | orchestrator |
2026-04-06 05:39:47.781835 | orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] ***
2026-04-06 05:39:47.781846 | orchestrator | Monday 06 April 2026 05:39:39 +0000 (0:00:01.257) 0:32:09.599 **********
2026-04-06 05:39:47.781856 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:39:47.781867 | orchestrator |
2026-04-06 05:39:47.781878 | orchestrator | PLAY [Upgrade node-exporter] ***************************************************
2026-04-06 05:39:47.781888 | orchestrator |
2026-04-06 05:39:47.781899 | orchestrator | TASK [Stop node-exporter] ******************************************************
2026-04-06 05:39:47.781910 | orchestrator | Monday 06 April 2026 05:39:41 +0000 (0:00:01.532) 0:32:11.132 **********
2026-04-06 05:39:47.781921 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:39:47.781931 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:39:47.781942 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:39:47.781953 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:39:47.781964 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:39:47.781974 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:39:47.781992 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:39:47.782002 | orchestrator |
2026-04-06 05:39:47.782013 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-06 05:39:47.782085 | orchestrator | Monday 06 April 2026 05:39:42 +0000 (0:00:00.765) 0:32:11.897 **********
2026-04-06 05:39:47.782096 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:39:47.782107 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:39:47.782118 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:39:47.782129 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:39:47.782140 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:39:47.782151 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:39:47.782161 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:39:47.782172 | orchestrator |
2026-04-06 05:39:47.782183 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-04-06 05:39:47.782216 | orchestrator | Monday 06 April 2026 05:39:44 +0000 (0:00:01.884) 0:32:13.782 **********
2026-04-06 05:39:47.782228 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:39:47.782239 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:39:47.782250 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:39:47.782260 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:39:47.782271 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:39:47.782282 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:39:47.782293 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:39:47.782303 | orchestrator |
2026-04-06 05:39:47.782314 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-04-06 05:39:47.782325 | orchestrator | Monday 06 April 2026 05:39:45 +0000 (0:00:01.696) 0:32:15.479 **********
2026-04-06 05:39:47.782336 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:39:47.782346 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:39:47.782357 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:39:47.782367 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:39:47.782378 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:39:47.782388 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:39:47.782399 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:39:47.782410 | orchestrator |
2026-04-06 05:39:47.782421 | orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************
2026-04-06 05:39:47.782432 | orchestrator | Monday 06 April 2026 05:39:47 +0000 (0:00:01.591) 0:32:17.070 **********
2026-04-06 05:39:47.782442 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:39:47.782453 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:39:47.782464 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:39:47.782482 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:40:06.513893 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:40:06.513988 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:40:06.514003 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:40:06.514097 | orchestrator |
2026-04-06 05:40:06.514130 | orchestrator | PLAY [Upgrade monitoring node] *************************************************
2026-04-06 05:40:06.514151 | orchestrator |
2026-04-06 05:40:06.514170 | orchestrator | TASK [Stop monitoring services] ************************************************
2026-04-06 05:40:06.514255 | orchestrator | Monday 06 April 2026 05:39:49 +0000 (0:00:02.125) 0:32:19.196 **********
2026-04-06 05:40:06.514269 | orchestrator | skipping: [testbed-manager] => (item=alertmanager)
2026-04-06 05:40:06.514281 | orchestrator | skipping: [testbed-manager] => (item=prometheus)
2026-04-06 05:40:06.514292 | orchestrator | skipping: [testbed-manager] => (item=grafana-server)
2026-04-06 05:40:06.514303 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:40:06.514314 | orchestrator |
2026-04-06 05:40:06.514325 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************
2026-04-06 05:40:06.514336 | orchestrator | Monday 06 April 2026 05:39:49 +0000 (0:00:00.166) 0:32:19.362 **********
2026-04-06 05:40:06.514347 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:40:06.514357 | orchestrator |
2026-04-06 05:40:06.514368 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************
2026-04-06 05:40:06.514417 | orchestrator | Monday 06 April 2026 05:39:49 +0000 (0:00:00.159) 0:32:19.521 **********
2026-04-06 05:40:06.514437 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:40:06.514451 | orchestrator |
2026-04-06 05:40:06.514464 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] ***********************
2026-04-06 05:40:06.514477 | orchestrator | Monday 06 April 2026 05:39:49 +0000 (0:00:00.151) 0:32:19.673 **********
2026-04-06 05:40:06.514490 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:40:06.514503 | orchestrator |
2026-04-06 05:40:06.514517 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] ***********************
2026-04-06 05:40:06.514529 | orchestrator | Monday 06 April 2026 05:39:50 +0000 (0:00:00.196) 0:32:19.870 **********
2026-04-06 05:40:06.514543 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:40:06.514555 | orchestrator |
2026-04-06 05:40:06.514568 | orchestrator | TASK [ceph-prometheus : Create prometheus directories] *************************
2026-04-06 05:40:06.514581 | orchestrator | Monday 06 April 2026 05:39:50 +0000 (0:00:00.608) 0:32:20.479 **********
2026-04-06 05:40:06.514594 | orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)
2026-04-06 05:40:06.514607 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)
2026-04-06 05:40:06.514619 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:40:06.514630 | orchestrator |
2026-04-06 05:40:06.514641 | orchestrator | TASK [ceph-prometheus : Write prometheus config file] **************************
2026-04-06 05:40:06.514652 | orchestrator | Monday 06 April 2026 05:39:50 +0000 (0:00:00.185) 0:32:20.665 **********
2026-04-06 05:40:06.514663 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:40:06.514674 | orchestrator |
2026-04-06 05:40:06.514685 | orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] *********
2026-04-06 05:40:06.514696 | orchestrator | Monday 06 April 2026 05:39:51 +0000 (0:00:00.162) 0:32:20.827 **********
2026-04-06 05:40:06.514707 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:40:06.514718 | orchestrator |
2026-04-06 05:40:06.514729 | orchestrator | TASK [ceph-prometheus : Copy alerting rules] ***********************************
2026-04-06 05:40:06.514740 | orchestrator | Monday 06 April 2026 05:39:51 +0000 (0:00:00.144) 0:32:20.972 **********
2026-04-06 05:40:06.514751 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:40:06.514762 | orchestrator |
2026-04-06 05:40:06.514773 | orchestrator | TASK [ceph-prometheus : Create alertmanager directories] ***********************
2026-04-06 05:40:06.514784 | orchestrator | Monday 06 April 2026 05:39:51 +0000 (0:00:00.161) 0:32:21.134 **********
2026-04-06 05:40:06.514794 | orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)
2026-04-06 05:40:06.514805 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/alertmanager)
2026-04-06 05:40:06.514816 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:40:06.514827 | orchestrator |
2026-04-06 05:40:06.514838 | orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************
2026-04-06 05:40:06.514849 | orchestrator | Monday 06 April 2026 05:39:51 +0000 (0:00:00.195) 0:32:21.330 **********
2026-04-06 05:40:06.514860 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:40:06.514870 | orchestrator |
2026-04-06 05:40:06.514881 | orchestrator | TASK [ceph-prometheus : Include setup_container.yml] ***************************
2026-04-06 05:40:06.514892 | orchestrator | Monday 06 April 2026 05:39:51 +0000 (0:00:00.159) 0:32:21.489 **********
2026-04-06 05:40:06.514916 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:40:06.514927 | orchestrator |
2026-04-06 05:40:06.514938 | orchestrator | TASK [ceph-grafana : Include setup_container.yml] ******************************
2026-04-06 05:40:06.514949 | orchestrator | Monday 06 April 2026 05:39:52 +0000 (0:00:00.626) 0:32:22.116 **********
2026-04-06 05:40:06.514960 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:40:06.514971 | orchestrator |
2026-04-06 05:40:06.514982 | orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] ****************************
2026-04-06 05:40:06.514993 | orchestrator | Monday 06 April 2026 05:39:52 +0000 (0:00:00.153) 0:32:22.270 **********
2026-04-06 05:40:06.515012 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:40:06.515023 | orchestrator |
2026-04-06 05:40:06.515034 | orchestrator | PLAY [Upgrade ceph dashboard] **************************************************
2026-04-06 05:40:06.515045 | orchestrator |
2026-04-06 05:40:06.515056 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-06 05:40:06.515066 | orchestrator | Monday 06 April 2026 05:39:53 +0000 (0:00:00.875) 0:32:23.145 **********
2026-04-06 05:40:06.515077 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:40:06.515088 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:40:06.515099 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:40:06.515110 | orchestrator |
2026-04-06 05:40:06.515121 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************
2026-04-06 05:40:06.515132 | orchestrator | Monday 06 April 2026 05:39:53 +0000 (0:00:00.541) 0:32:23.687 **********
2026-04-06 05:40:06.515143 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:40:06.515154 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:40:06.515202 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:40:06.515214 | orchestrator |
2026-04-06 05:40:06.515225 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************
2026-04-06 05:40:06.515236 | orchestrator | Monday 06 April 2026 05:39:54 +0000 (0:00:00.663) 0:32:24.351 **********
2026-04-06 05:40:06.515247 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:40:06.515258 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:40:06.515269 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:40:06.515279 | orchestrator |
2026-04-06 05:40:06.515290 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] ***********************
2026-04-06 05:40:06.515301 | orchestrator | Monday 06 April 2026 05:39:54 +0000 (0:00:00.309) 0:32:24.660 **********
2026-04-06 05:40:06.515312 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:40:06.515323 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:40:06.515334 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:40:06.515344 | orchestrator |
2026-04-06 05:40:06.515355 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] ***********************
2026-04-06 05:40:06.515366 | orchestrator | Monday 06 April 2026 05:39:55 +0000 (0:00:00.340) 0:32:25.001 **********
2026-04-06 05:40:06.515377 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:40:06.515388 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:40:06.515398 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:40:06.515409 | orchestrator |
2026-04-06 05:40:06.515420 | orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************
2026-04-06 05:40:06.515431 | orchestrator | Monday 06 April 2026 05:39:56 +0000 (0:00:00.837) 0:32:25.838 **********
2026-04-06 05:40:06.515442 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:40:06.515452 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:40:06.515463 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:40:06.515474 | orchestrator |
2026-04-06 05:40:06.515485 | orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************
2026-04-06 05:40:06.515495 | orchestrator | Monday 06 April 2026 05:39:56 +0000 (0:00:00.347) 0:32:26.185 **********
2026-04-06 05:40:06.515506 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:40:06.515517 | orchestrator |
2026-04-06 05:40:06.515528 | orchestrator | PLAY [Switch any existing crush buckets to straw2] *****************************
2026-04-06 05:40:06.515538 | orchestrator |
2026-04-06 05:40:06.515549 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-06 05:40:06.515560 | orchestrator | Monday 06 April 2026 05:39:57 +0000 (0:00:00.824) 0:32:27.009 **********
2026-04-06 05:40:06.515571 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:40:06.515582 | orchestrator |
2026-04-06 05:40:06.515592 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-06 05:40:06.515603 | orchestrator | Monday 06 April 2026 05:39:57 +0000 (0:00:00.480) 0:32:27.490 **********
2026-04-06 05:40:06.515614 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:40:06.515625 | orchestrator |
2026-04-06 05:40:06.515635 | orchestrator | TASK [Set_fact ceph_cmd] *******************************************************
2026-04-06 05:40:06.515653 | orchestrator | Monday 06 April 2026 05:39:57 +0000 (0:00:00.218) 0:32:27.709 **********
2026-04-06 05:40:06.515664 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:40:06.515675 | orchestrator |
2026-04-06 05:40:06.515686 | orchestrator | TASK [Backup the crushmap] *****************************************************
2026-04-06 05:40:06.515697 | orchestrator | Monday 06 April 2026 05:39:58 +0000 (0:00:00.470) 0:32:28.179 **********
2026-04-06 05:40:06.515708 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:40:06.515718 | orchestrator |
2026-04-06 05:40:06.515729 | orchestrator | TASK [Switch crush buckets to straw2] ******************************************
2026-04-06 05:40:06.515740 | orchestrator | Monday 06 April 2026 05:40:00 +0000 (0:00:02.162) 0:32:30.342 **********
2026-04-06 05:40:06.515751 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:40:06.515762 | orchestrator |
2026-04-06 05:40:06.515773 | orchestrator | TASK [Remove crushmap backup] **************************************************
2026-04-06 05:40:06.515784 | orchestrator | Monday 06 April 2026 05:40:02 +0000 (0:00:01.873) 0:32:32.215 **********
2026-04-06 05:40:06.515794 | orchestrator | changed: [testbed-node-0]
2026-04-06 05:40:06.515805 | orchestrator |
2026-04-06 05:40:06.515816 | orchestrator | PLAY [Show ceph status] ********************************************************
2026-04-06 05:40:06.515827 | orchestrator |
2026-04-06 05:40:06.515838 | orchestrator | TASK [Set_fact container_exec_cmd_status] **************************************
2026-04-06 05:40:06.515848 | orchestrator | Monday 06 April 2026 05:40:03 +0000 (0:00:01.116) 0:32:33.332 **********
2026-04-06 05:40:06.515859 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:40:06.515870 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:40:06.515881 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:40:06.515892 | orchestrator |
2026-04-06 05:40:06.515902 | orchestrator | TASK [Show ceph status] ********************************************************
2026-04-06 05:40:06.515918 | orchestrator | Monday 06 April 2026 05:40:04 +0000 (0:00:00.432) 0:32:33.764 **********
2026-04-06 05:40:06.515929 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:40:06.515940 | orchestrator |
2026-04-06 05:40:06.515951 | orchestrator | TASK [Show all daemons version] ************************************************
2026-04-06 05:40:06.515962 | orchestrator | Monday 06 April 2026 05:40:05 +0000 (0:00:01.309) 0:32:35.074 **********
2026-04-06 05:40:06.515972 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:40:06.515983 | orchestrator |
2026-04-06 05:40:06.515994 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 05:40:06.516006 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-06 05:40:06.516018 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=76  rescued=0 ignored=0
2026-04-06 05:40:06.516030 | orchestrator | testbed-node-0 : ok=248  changed=20  unreachable=0 failed=0 skipped=376  rescued=0 ignored=0
2026-04-06 05:40:06.516040 | orchestrator | testbed-node-1 : ok=191  changed=16  unreachable=0 failed=0 skipped=350  rescued=0 ignored=0
2026-04-06 05:40:06.516058 | orchestrator | testbed-node-2 : ok=196  changed=15  unreachable=0 failed=0 skipped=351  rescued=0 ignored=0
2026-04-06 05:40:08.409577 | orchestrator | testbed-node-3 : ok=317  changed=21  unreachable=0 failed=0 skipped=362  rescued=0 ignored=0
2026-04-06 05:40:08.409666 | orchestrator | testbed-node-4 : ok=307  changed=18  unreachable=0 failed=0 skipped=359  rescued=0 ignored=0
2026-04-06 05:40:08.409681 | orchestrator | testbed-node-5 : ok=303  changed=18  unreachable=0 failed=0 skipped=344  rescued=0 ignored=0
2026-04-06 05:40:08.409693 | orchestrator |
2026-04-06 05:40:08.409705 | orchestrator |
2026-04-06 05:40:08.409716 | orchestrator |
2026-04-06 05:40:08.409727 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 05:40:08.409763 | orchestrator | Monday 06 April 2026 05:40:07 +0000 (0:00:02.567) 0:32:37.642 **********
2026-04-06 05:40:08.409774 | orchestrator | ===============================================================================
2026-04-06 05:40:08.409785 | orchestrator | Disable pg autoscale on pools ------------------------------------------ 76.10s
2026-04-06 05:40:08.409796 | orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 75.58s
2026-04-06 05:40:08.409807 | orchestrator | Waiting for clean pgs... ----------------------------------------------- 43.45s
2026-04-06 05:40:08.409818 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.46s
2026-04-06 05:40:08.409828 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.95s
2026-04-06 05:40:08.409839 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.62s
2026-04-06 05:40:08.409850 | orchestrator | Gather and delegate facts ---------------------------------------------- 30.46s
2026-04-06 05:40:08.409861 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 27.04s
2026-04-06 05:40:08.409872 | orchestrator | Stop ceph mgr ---------------------------------------------------------- 25.71s
2026-04-06 05:40:08.409883 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.89s
2026-04-06 05:40:08.409894 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.83s
2026-04-06 05:40:08.409904 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 20.53s
2026-04-06 05:40:08.409915 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 13.80s
2026-04-06 05:40:08.409926 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.76s
2026-04-06 05:40:08.409937 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 11.45s
2026-04-06 05:40:08.409947 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 10.92s
2026-04-06 05:40:08.409958 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 10.41s
2026-04-06 05:40:08.409969 | orchestrator | Stop standby ceph mds -------------------------------------------------- 10.29s
2026-04-06 05:40:08.409980 | orchestrator | Stop ceph osd ----------------------------------------------------------- 9.59s
2026-04-06 05:40:08.409991 | orchestrator | Set cluster configs ----------------------------------------------------- 9.42s
2026-04-06 05:40:08.537610 | orchestrator | + osism apply cephclient
2026-04-06 05:40:09.745577 | orchestrator | 2026-04-06 05:40:09 | INFO  | Prepare task for execution of cephclient.
2026-04-06 05:40:09.807144 | orchestrator | 2026-04-06 05:40:09 | INFO  | Task 8b0fefd3-9bb8-44b3-abe4-30c33d0c5a51 (cephclient) was prepared for execution.
2026-04-06 05:40:09.807282 | orchestrator | 2026-04-06 05:40:09 | INFO  | It takes a moment until task 8b0fefd3-9bb8-44b3-abe4-30c33d0c5a51 (cephclient) has been started and output is visible here.
2026-04-06 05:40:37.424684 | orchestrator |
2026-04-06 05:40:37.424816 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-04-06 05:40:37.424834 | orchestrator |
2026-04-06 05:40:37.424845 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-04-06 05:40:37.424873 | orchestrator | Monday 06 April 2026 05:40:15 +0000 (0:00:02.110) 0:00:02.110 **********
2026-04-06 05:40:37.424885 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-04-06 05:40:37.424896 | orchestrator |
2026-04-06 05:40:37.424906 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-04-06 05:40:37.424916 | orchestrator | Monday 06 April 2026 05:40:17 +0000 (0:00:01.858) 0:00:03.969 **********
2026-04-06 05:40:37.424932 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-04-06 05:40:37.424949 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data)
2026-04-06 05:40:37.424965 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-04-06 05:40:37.425013 | orchestrator |
2026-04-06 05:40:37.425031 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-04-06 05:40:37.425045 | orchestrator | Monday 06 April 2026 05:40:20 +0000 (0:00:02.625) 0:00:06.595 **********
2026-04-06 05:40:37.425061 | orchestrator | ok: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-04-06 05:40:37.425077 | orchestrator |
2026-04-06 05:40:37.425096 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-04-06 05:40:37.425112 | orchestrator | Monday 06 April 2026 05:40:22 +0000 (0:00:02.048) 0:00:08.643 **********
2026-04-06 05:40:37.425128 | orchestrator | ok: [testbed-manager]
2026-04-06 05:40:37.425139 | orchestrator |
2026-04-06 05:40:37.425148 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-04-06 05:40:37.425193 | orchestrator | Monday 06 April 2026 05:40:23 +0000 (0:00:01.917) 0:00:10.561 **********
2026-04-06 05:40:37.425204 | orchestrator | ok: [testbed-manager]
2026-04-06 05:40:37.425216 | orchestrator |
2026-04-06 05:40:37.425227 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-04-06 05:40:37.425239 | orchestrator | Monday 06 April 2026 05:40:25 +0000 (0:00:01.876) 0:00:12.438 **********
2026-04-06 05:40:37.425250 | orchestrator | ok: [testbed-manager]
2026-04-06 05:40:37.425261 | orchestrator |
2026-04-06 05:40:37.425273 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-04-06 05:40:37.425284 | orchestrator | Monday 06 April 2026 05:40:28 +0000 (0:00:02.270) 0:00:14.708 **********
2026-04-06 05:40:37.425295 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-04-06 05:40:37.425307 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool)
2026-04-06 05:40:37.425318 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-04-06 05:40:37.425329 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-04-06 05:40:37.425340 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-04-06 05:40:37.425357 | orchestrator |
2026-04-06 05:40:37.425373 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-04-06 05:40:37.425389 | orchestrator | Monday 06 April 2026 05:40:32 +0000 (0:00:04.789) 0:00:19.498 **********
2026-04-06 05:40:37.425404 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-04-06 05:40:37.425420 | orchestrator |
2026-04-06 05:40:37.425437 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-04-06 05:40:37.425453 | orchestrator | Monday 06 April 2026 05:40:34 +0000 (0:00:01.454) 0:00:20.952 **********
2026-04-06 05:40:37.425471 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:40:37.425483 | orchestrator |
2026-04-06 05:40:37.425494 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-04-06 05:40:37.425505 | orchestrator | Monday 06 April 2026 05:40:35 +0000 (0:00:01.218) 0:00:22.171 **********
2026-04-06 05:40:37.425516 | orchestrator | skipping: [testbed-manager]
2026-04-06 05:40:37.425527 | orchestrator |
2026-04-06 05:40:37.425538 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 05:40:37.425550 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 05:40:37.425562 | orchestrator |
2026-04-06 05:40:37.425575 | orchestrator |
2026-04-06 05:40:37.425591 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 05:40:37.425607 | orchestrator | Monday 06 April 2026 05:40:37 +0000 (0:00:01.494) 0:00:23.665 **********
2026-04-06 05:40:37.425626 | orchestrator | ===============================================================================
2026-04-06 05:40:37.425644 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.79s
2026-04-06 05:40:37.425661 | orchestrator | osism.services.cephclient : Create required directories ----------------- 2.63s
2026-04-06 05:40:37.425677 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------- 2.27s
2026-04-06 05:40:37.425693 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 2.05s
2026-04-06 05:40:37.425724 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.92s
2026-04-06 05:40:37.425742 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.88s
2026-04-06 05:40:37.425753 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 1.86s
2026-04-06 05:40:37.425763 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 1.49s
2026-04-06 05:40:37.425773 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 1.45s
2026-04-06 05:40:37.425783 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 1.22s
2026-04-06 05:40:37.631602 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-04-06 05:40:37.631703 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh
2026-04-06 05:40:37.640175 | orchestrator | + set -e
2026-04-06 05:40:37.640262 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-06 05:40:37.640278 | orchestrator | ++ export INTERACTIVE=false
2026-04-06 05:40:37.640290 | orchestrator | ++ INTERACTIVE=false
2026-04-06 05:40:37.640301 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-06 05:40:37.640312 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-06 05:40:37.640323 | orchestrator | + source /opt/manager-vars.sh
2026-04-06 05:40:37.640334 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-06 05:40:37.640345 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-06 05:40:37.640373 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-06 05:40:37.640384 | orchestrator | ++ CEPH_VERSION=reef
2026-04-06 05:40:37.640395 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-06 05:40:37.640406 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-06 05:40:37.640417 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-06 05:40:37.640429 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-06 05:40:37.640440 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-06 05:40:37.640451 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-06 05:40:37.640462 | orchestrator | ++ export ARA=false
2026-04-06 05:40:37.640473 | orchestrator | ++ ARA=false
2026-04-06 05:40:37.640484 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-06 05:40:37.640495 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-06 05:40:37.640506 | orchestrator | ++ export TEMPEST=false
2026-04-06 05:40:37.640517 | orchestrator | ++ TEMPEST=false
2026-04-06 05:40:37.640528 | orchestrator | ++ export IS_ZUUL=true
2026-04-06 05:40:37.640539 | orchestrator | ++ IS_ZUUL=true
2026-04-06 05:40:37.640550 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-04-06 05:40:37.640561 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-04-06 05:40:37.640572 | orchestrator | ++ export EXTERNAL_API=false
2026-04-06 05:40:37.640583 | orchestrator | ++ EXTERNAL_API=false
2026-04-06 05:40:37.640594 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-06 05:40:37.640605 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-06 05:40:37.640616 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-06 05:40:37.640627 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-06 05:40:37.640639 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-06 05:40:37.640650 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-06 05:40:37.640661 | orchestrator | ++ export RABBITMQ3TO4=true
2026-04-06 05:40:37.640672 | orchestrator | ++ RABBITMQ3TO4=true
2026-04-06 05:40:37.640683 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-06 05:40:37.641116 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-06 05:40:37.646537 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-06 05:40:37.646583 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-06 05:40:37.646595 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-06 05:40:37.646606 | orchestrator | + osism migrate rabbitmq3to4 prepare
2026-04-06 05:40:46.507544 | orchestrator | 2026-04-06 05:40:46 | ERROR  | Unable to get ansible vault password
2026-04-06 05:40:46.507618 | orchestrator | 2026-04-06 05:40:46 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-06 05:40:46.507625 | orchestrator | 2026-04-06 05:40:46 | ERROR  | Dropping encrypted entries
2026-04-06 05:40:46.543846 | orchestrator | 2026-04-06 05:40:46 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-04-06 05:40:46.544869 | orchestrator | 2026-04-06 05:40:46 | INFO  | Kolla configuration check passed
2026-04-06 05:40:46.720662 | orchestrator | 2026-04-06 05:40:46 | INFO  | Created vhost 'openstack' with default_queue_type=quorum
2026-04-06 05:40:46.738211 | orchestrator | 2026-04-06 05:40:46 | INFO  | Set permissions for user 'openstack' on vhost 'openstack'
2026-04-06 05:40:46.985606 | orchestrator | + osism migrate rabbitmq3to4 list
2026-04-06 05:40:53.268643 | orchestrator | 2026-04-06 05:40:53 | ERROR  | Unable to get ansible vault password
2026-04-06 05:40:53.268751 | orchestrator | 2026-04-06 05:40:53 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-06 05:40:53.268767 | orchestrator | 2026-04-06 05:40:53 | ERROR  | Dropping encrypted entries
2026-04-06 05:40:53.301771 | orchestrator | 2026-04-06 05:40:53 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-04-06 05:40:53.470621 | orchestrator | 2026-04-06 05:40:53 | INFO  | Found 206 classic queue(s) in vhost '/':
2026-04-06 05:40:53.470795 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - alarm.all.sample (vhost: /, messages: 0)
2026-04-06 05:40:53.470816 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - alarming.sample (vhost: /, messages: 0)
2026-04-06 05:40:53.470841 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - barbican.workers (vhost: /, messages: 0)
2026-04-06 05:40:53.471063 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - barbican.workers.barbican.queue (vhost: /, messages: 0)
2026-04-06 05:40:53.471232 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - barbican.workers_fanout_4b0ceb6861d44a2aaee96c7d9f96eea7 (vhost: /, messages: 0)
2026-04-06 05:40:53.471467 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - barbican.workers_fanout_d74bdd9e69484f5bb853704613a82935 (vhost: /, messages: 0)
2026-04-06 05:40:53.471834 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - barbican.workers_fanout_dba43a0bf60049bb98dc4e73ca687fce (vhost: /, messages: 0)
2026-04-06 05:40:53.471854 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - barbican_notifications.info (vhost: /, messages: 0)
2026-04-06 05:40:53.472330 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - central (vhost: /, messages: 1)
2026-04-06 05:40:53.472350 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - central.testbed-node-0 (vhost: /, messages: 0)
2026-04-06 05:40:53.472608 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - central.testbed-node-1 (vhost: /, messages: 0)
2026-04-06 05:40:53.472822 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - central.testbed-node-2 (vhost: /, messages: 0)
2026-04-06 05:40:53.473034 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - central_fanout_5e634e47b0254d269eab3c4ea5cece45 (vhost: /, messages: 0)
2026-04-06 05:40:53.473386 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - central_fanout_9ff81e7645b142ac8e3d2673833f9aab (vhost: /, messages: 0)
2026-04-06 05:40:53.473640 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - central_fanout_b64802bbbf1e4125b54f6ab4b6d47267 (vhost: /, messages: 0)
2026-04-06 05:40:53.473658 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - central_fanout_f16898539dc8441d9b450f8cf58ee549 (vhost: /, messages: 0)
2026-04-06 05:40:53.473923 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-backup (vhost: /, messages: 0)
2026-04-06 05:40:53.474305 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-backup.testbed-node-0 (vhost: /, messages: 0)
2026-04-06 05:40:53.474342 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-backup.testbed-node-1 (vhost: /, messages: 0)
2026-04-06 05:40:53.474482 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-backup.testbed-node-2 (vhost: /, messages: 0)
2026-04-06 05:40:53.474775 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-backup_fanout_4dba8e82d77a40eb8031d42b2d075e57 (vhost: /, messages: 0)
2026-04-06 05:40:53.474791 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-backup_fanout_ba0ec3526624441ca9424b1d60ca5e1f (vhost: /, messages: 0)
2026-04-06 05:40:53.475203 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-backup_fanout_e23d8914d34947f9bcdbcb3fc1d75267 (vhost: /, messages: 0)
2026-04-06 05:40:53.475221 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-scheduler (vhost: /, messages: 0)
2026-04-06 05:40:53.475498 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-04-06 05:40:53.475772 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-04-06 05:40:53.477086 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-04-06 05:40:53.477104 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-scheduler_fanout_032bbe23719040dfa526df55b945b81f (vhost: /, messages: 0)
2026-04-06 05:40:53.477113 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-scheduler_fanout_0d4ed14cfa304182847370a35693bc52 (vhost: /, messages: 0)
2026-04-06 05:40:53.477121 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-scheduler_fanout_82e67d1013094786954a0d491ee1b630 (vhost: /, messages: 0)
2026-04-06 05:40:53.477130 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-volume (vhost: /, messages: 0)
2026-04-06 05:40:53.477152 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: /, messages: 0)
2026-04-06 05:40:53.477160 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: /, messages: 0)
2026-04-06 05:40:53.477168 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_8a7e2d230d5641c884ce9f1ca2520a80 (vhost: /, messages: 0)
2026-04-06 05:40:53.477177 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: /, messages: 0)
2026-04-06 05:40:53.477185 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: /, messages: 0)
2026-04-06 05:40:53.477193 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_cc60cf9b9ce847278f4b385e2e6f355d (vhost: /, messages: 0)
2026-04-06 05:40:53.477201 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: /, messages: 0)
2026-04-06 05:40:53.477209 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: /, messages: 0)
2026-04-06 05:40:53.477275 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes_fanout_9920ac2edeca45e6b9e25120bdbfc520 (vhost: /, messages: 0)
2026-04-06 05:40:53.477287 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-volume_fanout_057dbbec3a4c47d49b2058c66d067adb (vhost: /, messages: 0)
2026-04-06 05:40:53.477295 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-volume_fanout_399feba9e40f4c65bd07f000f5993962 (vhost: /, messages: 0)
2026-04-06 05:40:53.477495 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - cinder-volume_fanout_a7b4f5016bd945e8a8719fe7fcc4d3cc (vhost: /, messages: 0)
2026-04-06 05:40:53.477646 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - compute (vhost: /, messages: 0)
2026-04-06 05:40:53.477810 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - compute.testbed-node-3 (vhost: /, messages: 0)
2026-04-06 05:40:53.478119 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - compute.testbed-node-4 (vhost: /, messages: 0)
2026-04-06 05:40:53.478231 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - compute.testbed-node-5 (vhost: /, messages: 0)
2026-04-06 05:40:53.478249 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - compute_fanout_0849b3a283054c3e9a2f1be01f083582 (vhost: /, messages: 0)
2026-04-06 05:40:53.478522 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - compute_fanout_c6a563ec9f204a3fb3d086bf3052b526 (vhost: /, messages: 0)
2026-04-06 05:40:53.478865 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - compute_fanout_d2ccaaf757c6468badd10d05535b3128 (vhost: /, messages: 0)
2026-04-06 05:40:53.478879 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - conductor (vhost: /, messages: 0)
2026-04-06 05:40:53.478890 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - conductor.testbed-node-0 (vhost: /, messages: 0)
2026-04-06 05:40:53.479073 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - conductor.testbed-node-1 (vhost: /, messages: 0)
2026-04-06 05:40:53.479194 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - conductor.testbed-node-2 (vhost: /, messages: 0)
2026-04-06 05:40:53.479474 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - conductor_fanout_14d80f524ec04d09bce8205c8cf7d69f (vhost: /, messages: 0)
2026-04-06 05:40:53.479600 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - conductor_fanout_63f4e3719a1c4a6b8cac765d62aeb075 (vhost: /, messages: 0)
2026-04-06 05:40:53.479802 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - conductor_fanout_88dbf35c0c0c47b29308a62b9eed54b9 (vhost: /, messages: 0) 2026-04-06 05:40:53.479815 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - conductor_fanout_9251267235784a1b874e45d7df90dee8 (vhost: /, messages: 0) 2026-04-06 05:40:53.480251 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - conductor_fanout_b00a9d72eb244b19ac056950439f1273 (vhost: /, messages: 0) 2026-04-06 05:40:53.480266 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - conductor_fanout_e6e586e921314be2be458e6068e9252f (vhost: /, messages: 0) 2026-04-06 05:40:53.480432 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - event.sample (vhost: /, messages: 5) 2026-04-06 05:40:53.480589 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - magnum-conductor (vhost: /, messages: 0) 2026-04-06 05:40:53.480895 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - magnum-conductor.cdopvc2wtmwa (vhost: /, messages: 0) 2026-04-06 05:40:53.481095 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - magnum-conductor.hyvpxyptbmqb (vhost: /, messages: 0) 2026-04-06 05:40:53.481118 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - magnum-conductor.zymsmiuafytu (vhost: /, messages: 0) 2026-04-06 05:40:53.481297 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - magnum-conductor_fanout_47cc7db69c0a44e6b4a557a70b3ff099 (vhost: /, messages: 0) 2026-04-06 05:40:53.481495 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - magnum-conductor_fanout_4c054414bcc54bd48f9686b389903201 (vhost: /, messages: 0) 2026-04-06 05:40:53.481672 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - magnum-conductor_fanout_9f11861ca63f4951abc14e0976135662 (vhost: /, messages: 0) 2026-04-06 05:40:53.481919 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - magnum-conductor_fanout_a9bda3b9ea7648e69bf55296f8ba80dd (vhost: /, messages: 0) 2026-04-06 05:40:53.482113 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - magnum-conductor_fanout_b116af4aa976406289281abc1c25e974 
(vhost: /, messages: 0) 2026-04-06 05:40:53.482288 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - magnum-conductor_fanout_b7a82fba044d44ca86922e21db8f08e1 (vhost: /, messages: 0) 2026-04-06 05:40:53.482401 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - magnum-conductor_fanout_d11d8229005a4a02a89bb0d3b335f827 (vhost: /, messages: 0) 2026-04-06 05:40:53.482438 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - magnum-conductor_fanout_d62fe757ae6148479fcad4fcbc3dd7d2 (vhost: /, messages: 0) 2026-04-06 05:40:53.484084 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - magnum-conductor_fanout_f75dacc656e14a6fa8f33dd0802dafc6 (vhost: /, messages: 0) 2026-04-06 05:40:53.484100 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - manila-data (vhost: /, messages: 0) 2026-04-06 05:40:53.484107 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - manila-data.testbed-node-0 (vhost: /, messages: 0) 2026-04-06 05:40:53.484114 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - manila-data.testbed-node-1 (vhost: /, messages: 0) 2026-04-06 05:40:53.484128 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - manila-data.testbed-node-2 (vhost: /, messages: 0) 2026-04-06 05:40:53.484136 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - manila-data_fanout_1ca9e37ba0644230be8244053f3d04b5 (vhost: /, messages: 0) 2026-04-06 05:40:53.484156 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - manila-data_fanout_96566a0176f0409fbad98cac79bb4467 (vhost: /, messages: 0) 2026-04-06 05:40:53.484163 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - manila-data_fanout_dc06679d72024847896566b75bae7dfb (vhost: /, messages: 0) 2026-04-06 05:40:53.484169 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - manila-scheduler (vhost: /, messages: 0) 2026-04-06 05:40:53.484176 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-04-06 05:40:53.484183 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: /, messages: 0) 
2026-04-06 05:40:53.484189 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-04-06 05:40:53.484196 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - manila-scheduler_fanout_722d41214eef4549aa0187248e9a8847 (vhost: /, messages: 0) 2026-04-06 05:40:53.484203 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - manila-scheduler_fanout_a985398961794acf8ed2d5b302caa87a (vhost: /, messages: 0) 2026-04-06 05:40:53.484210 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - manila-scheduler_fanout_e579d1d327064d34a4428d3881544d58 (vhost: /, messages: 0) 2026-04-06 05:40:53.484264 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - manila-share (vhost: /, messages: 0) 2026-04-06 05:40:53.484274 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: /, messages: 0) 2026-04-06 05:40:53.484281 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: /, messages: 0) 2026-04-06 05:40:53.484287 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: /, messages: 0) 2026-04-06 05:40:53.484294 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - manila-share_fanout_02e138ef57dc4cad844f197973ad801a (vhost: /, messages: 0) 2026-04-06 05:40:53.484301 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - manila-share_fanout_0451adc48901463d9f5c57d59d15d785 (vhost: /, messages: 0) 2026-04-06 05:40:53.484307 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - manila-share_fanout_58f7c1b614e644a89852e82d0a3f93de (vhost: /, messages: 0) 2026-04-06 05:40:53.484314 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - notifications.audit (vhost: /, messages: 0) 2026-04-06 05:40:53.484324 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - notifications.critical (vhost: /, messages: 0) 2026-04-06 05:40:53.484331 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - notifications.debug (vhost: /, messages: 0) 2026-04-06 
05:40:53.484579 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - notifications.error (vhost: /, messages: 0) 2026-04-06 05:40:53.484596 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - notifications.info (vhost: /, messages: 0) 2026-04-06 05:40:53.484793 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - notifications.sample (vhost: /, messages: 0) 2026-04-06 05:40:53.485043 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - notifications.warn (vhost: /, messages: 0) 2026-04-06 05:40:53.485056 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - octavia_provisioning_v2 (vhost: /, messages: 0) 2026-04-06 05:40:53.485238 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: /, messages: 0) 2026-04-06 05:40:53.485251 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: /, messages: 0) 2026-04-06 05:40:53.485436 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: /, messages: 0) 2026-04-06 05:40:53.485586 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - octavia_provisioning_v2_fanout_16bd896eb2f0497db17f359fca655cd7 (vhost: /, messages: 0) 2026-04-06 05:40:53.485673 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - octavia_provisioning_v2_fanout_39fd110bd34a4b1ea4b56a7adafea6ec (vhost: /, messages: 0) 2026-04-06 05:40:53.486231 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - octavia_provisioning_v2_fanout_ad6d2550fccd44e09a2b58313deace3e (vhost: /, messages: 0) 2026-04-06 05:40:53.486252 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - producer (vhost: /, messages: 0) 2026-04-06 05:40:53.486267 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - producer.testbed-node-0 (vhost: /, messages: 0) 2026-04-06 05:40:53.486274 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - producer.testbed-node-1 (vhost: /, messages: 0) 2026-04-06 05:40:53.486281 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - producer.testbed-node-2 (vhost: /, messages: 0) 
2026-04-06 05:40:53.486605 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - producer_fanout_1e1ec29c93dd429fbfce14530190b826 (vhost: /, messages: 0) 2026-04-06 05:40:53.486619 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - producer_fanout_1e93b44812e64817852c760c4f6c519e (vhost: /, messages: 0) 2026-04-06 05:40:53.486795 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - producer_fanout_3e2654ca7dda41339d8beafd12c64f00 (vhost: /, messages: 0) 2026-04-06 05:40:53.486912 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - producer_fanout_af2b883e305444cdab9a6d2c2efbafe7 (vhost: /, messages: 0) 2026-04-06 05:40:53.486923 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - producer_fanout_e814c57ff9484833b35a43b7e566cc83 (vhost: /, messages: 0) 2026-04-06 05:40:53.487312 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - producer_fanout_f01beabb3aed46b48c9a55ce4162c5b3 (vhost: /, messages: 0) 2026-04-06 05:40:53.487489 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-plugin (vhost: /, messages: 0) 2026-04-06 05:40:53.487738 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-04-06 05:40:53.488071 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-04-06 05:40:53.488094 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-04-06 05:40:53.488315 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-plugin_fanout_06f818fc21824eaa93fb097a3e03b227 (vhost: /, messages: 0) 2026-04-06 05:40:53.488328 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-plugin_fanout_290b024d57d946a5a3c3fb45ba968e61 (vhost: /, messages: 0) 2026-04-06 05:40:53.488689 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-plugin_fanout_34051814468f4872aaed558436c479a6 (vhost: /, messages: 0) 2026-04-06 05:40:53.488712 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-plugin_fanout_66d1882c4ff0421b9db2392ff8ce18e5 (vhost: /, messages: 0) 2026-04-06 
05:40:53.488934 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-plugin_fanout_6dbd8a90bb094e0b972149928b004d34 (vhost: /, messages: 0) 2026-04-06 05:40:53.488945 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-plugin_fanout_7bc576f355c340d5bfab7a1c0d35675c (vhost: /, messages: 0) 2026-04-06 05:40:53.489129 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-plugin_fanout_7ea948e7c9a449068c5804946d298ce4 (vhost: /, messages: 0) 2026-04-06 05:40:53.489480 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-plugin_fanout_c48ba9d12fa648dbaefd065cedede542 (vhost: /, messages: 0) 2026-04-06 05:40:53.489492 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-plugin_fanout_cbdde6f0c6cc4c4dab9c121fed9a5645 (vhost: /, messages: 0) 2026-04-06 05:40:53.489717 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-reports-plugin (vhost: /, messages: 0) 2026-04-06 05:40:53.489829 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-04-06 05:40:53.490080 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-04-06 05:40:53.490092 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-04-06 05:40:53.490257 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-reports-plugin_fanout_0ff49cbe72cf425aa670e72e1d96f27a (vhost: /, messages: 0) 2026-04-06 05:40:53.490325 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-reports-plugin_fanout_64bf38305a594c3299fedc37acaa816e (vhost: /, messages: 0) 2026-04-06 05:40:53.490716 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-reports-plugin_fanout_67a27d69e098478eaf1d04bed3d7b32a (vhost: /, messages: 0) 2026-04-06 05:40:53.490728 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-reports-plugin_fanout_702d8d550e7b4bb0871b79c12b2302c8 (vhost: /, messages: 0) 2026-04-06 05:40:53.490735 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - 
q-reports-plugin_fanout_7cd9bcb8b8a544638826361da11bb499 (vhost: /, messages: 0) 2026-04-06 05:40:53.490817 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-reports-plugin_fanout_7d2f46df98384aca81f84b3d2452e3fd (vhost: /, messages: 0) 2026-04-06 05:40:53.490954 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-reports-plugin_fanout_98d8f53f1da2451f875726008c2990f2 (vhost: /, messages: 0) 2026-04-06 05:40:53.491103 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-reports-plugin_fanout_a08dcbca4bec4368beab7c611bd255c7 (vhost: /, messages: 0) 2026-04-06 05:40:53.491216 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-reports-plugin_fanout_a2fc515859ab419f90fe491b3cca6a83 (vhost: /, messages: 0) 2026-04-06 05:40:53.491754 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-reports-plugin_fanout_bfc97bb36ce048388d5a93187998b6cb (vhost: /, messages: 0) 2026-04-06 05:40:53.491768 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-reports-plugin_fanout_c5833c5fbb564d72bdd08a9b036e0830 (vhost: /, messages: 0) 2026-04-06 05:40:53.491775 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-reports-plugin_fanout_cc04bf0c67014edeb22ce4139dfb09c1 (vhost: /, messages: 0) 2026-04-06 05:40:53.491785 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-reports-plugin_fanout_de5046e6f7aa441b9ae14007089b0ed7 (vhost: /, messages: 0) 2026-04-06 05:40:53.491796 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-reports-plugin_fanout_df2a8f0731cf43d4ab05b3c9c8dd1563 (vhost: /, messages: 0) 2026-04-06 05:40:53.491818 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-reports-plugin_fanout_ee3416f78ed347fa8be54417b18ef7c4 (vhost: /, messages: 0) 2026-04-06 05:40:53.491954 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-reports-plugin_fanout_f6bebeae40d2467784890e0ca4ee42a2 (vhost: /, messages: 0) 2026-04-06 05:40:53.491966 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-reports-plugin_fanout_f81eb1405ba44a08825c5138cdbf0b4f (vhost: /, messages: 0) 2026-04-06 
05:40:53.492186 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-reports-plugin_fanout_f8acfc9d5821460db766305f77943b17 (vhost: /, messages: 0) 2026-04-06 05:40:53.492355 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-server-resource-versions (vhost: /, messages: 0) 2026-04-06 05:40:53.492370 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: /, messages: 0) 2026-04-06 05:40:53.492660 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: /, messages: 0) 2026-04-06 05:40:53.492671 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: /, messages: 0) 2026-04-06 05:40:53.492868 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-server-resource-versions_fanout_16f9fdf296284deaaeda92ead2ce0d00 (vhost: /, messages: 0) 2026-04-06 05:40:53.493000 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-server-resource-versions_fanout_28680bdbc54e41a6b42cb00eb7c74023 (vhost: /, messages: 0) 2026-04-06 05:40:53.493293 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-server-resource-versions_fanout_5231e3f16c4744dea6640fd7b875acf2 (vhost: /, messages: 0) 2026-04-06 05:40:53.493314 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-server-resource-versions_fanout_666a4afe192a41d8b419d25dfae0dfa0 (vhost: /, messages: 0) 2026-04-06 05:40:53.493324 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-server-resource-versions_fanout_8173d823b86944b1a87fb11b4b4c7152 (vhost: /, messages: 0) 2026-04-06 05:40:53.493573 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-server-resource-versions_fanout_a51a5014bc2d4a98b037ab138be12c43 (vhost: /, messages: 0) 2026-04-06 05:40:53.493751 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-server-resource-versions_fanout_c5a17b48de814366b4760794160dabfd (vhost: /, messages: 0) 2026-04-06 05:40:53.493894 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - 
q-server-resource-versions_fanout_d1c76d92162b4861b6a8bc65f1463a2d (vhost: /, messages: 0) 2026-04-06 05:40:53.494916 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - q-server-resource-versions_fanout_e063753c62af452bb7bf95e86ef54ec9 (vhost: /, messages: 0) 2026-04-06 05:40:53.494946 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - reply_008afd28d9664b5c85ff7a1cd9b8cbe5 (vhost: /, messages: 0) 2026-04-06 05:40:53.494957 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - reply_15147918694740b5b003bf2ef0d26dcd (vhost: /, messages: 0) 2026-04-06 05:40:53.494968 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - reply_35682515941540e88b99e1b62cd98118 (vhost: /, messages: 0) 2026-04-06 05:40:53.494979 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - reply_4575de27bcd242e58497404b64a373a5 (vhost: /, messages: 0) 2026-04-06 05:40:53.494999 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - reply_45aa96e5a36e42d994ca48f8f496d790 (vhost: /, messages: 0) 2026-04-06 05:40:53.495010 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - reply_4652eca12261406293924ac1ee00fa15 (vhost: /, messages: 0) 2026-04-06 05:40:53.495320 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - reply_4c0596f8be0f456d94c14e93c4fbaaae (vhost: /, messages: 0) 2026-04-06 05:40:53.495347 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - reply_540a7fd733ff43f8b958d50b65d7e39f (vhost: /, messages: 0) 2026-04-06 05:40:53.495372 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - reply_5570928fc8e94f02bf7f4417ada3afb4 (vhost: /, messages: 0) 2026-04-06 05:40:53.496647 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - reply_6572dcc868cf4f0aa12730807843eed8 (vhost: /, messages: 0) 2026-04-06 05:40:53.496761 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - reply_67d9ebc349b3442b99141479d75b1616 (vhost: /, messages: 0) 2026-04-06 05:40:53.496777 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - reply_9ea6d5684f93484cb7e337a9a581af75 (vhost: /, messages: 0) 2026-04-06 05:40:53.496796 | orchestrator | 2026-04-06 
05:40:53 | INFO  |  - reply_a1c7c9dd515946678b97d611327da7b2 (vhost: /, messages: 0) 2026-04-06 05:40:53.496806 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - reply_ad85ce300141453d8085f878f1d10105 (vhost: /, messages: 0) 2026-04-06 05:40:53.496908 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - reply_d187808885d44e0e9d1c8850bcfc560f (vhost: /, messages: 0) 2026-04-06 05:40:53.497080 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - reply_d8374e2e5e0b456888092b7938713472 (vhost: /, messages: 0) 2026-04-06 05:40:53.497254 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - reply_d84ff637bd9444b595ecf1e39baa5bbc (vhost: /, messages: 0) 2026-04-06 05:40:53.497344 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - reply_de46d991d18a408b9ee99f0cff6b3ef2 (vhost: /, messages: 0) 2026-04-06 05:40:53.497519 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - reply_f2a4c05ee22e4919b52fdefeb00ec7d5 (vhost: /, messages: 1) 2026-04-06 05:40:53.497727 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - scheduler (vhost: /, messages: 0) 2026-04-06 05:40:53.497930 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-04-06 05:40:53.498080 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-04-06 05:40:53.498286 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-04-06 05:40:53.498372 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - scheduler_fanout_352cf02188b741d083d561b83b61ff33 (vhost: /, messages: 0) 2026-04-06 05:40:53.498609 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - scheduler_fanout_6a14253641014361b0bcc082f7740e84 (vhost: /, messages: 0) 2026-04-06 05:40:53.498763 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - scheduler_fanout_81a8900008d3433c90b5d9ce6a66b029 (vhost: /, messages: 0) 2026-04-06 05:40:53.498942 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - scheduler_fanout_8d85a9e0f82842aeb1bc9d3423264bde 
(vhost: /, messages: 0) 2026-04-06 05:40:53.499164 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - scheduler_fanout_abe34cf427f4401b8cea3f7e183c7010 (vhost: /, messages: 0) 2026-04-06 05:40:53.499266 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - scheduler_fanout_f4c9991313e94c5dbd4c83147892616d (vhost: /, messages: 0) 2026-04-06 05:40:53.499384 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - worker (vhost: /, messages: 0) 2026-04-06 05:40:53.499862 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - worker.testbed-node-0 (vhost: /, messages: 0) 2026-04-06 05:40:53.499889 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - worker.testbed-node-1 (vhost: /, messages: 0) 2026-04-06 05:40:53.499900 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - worker.testbed-node-2 (vhost: /, messages: 0) 2026-04-06 05:40:53.500102 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - worker_fanout_57e7a5b17c3944c2941c1cc9b14e9b69 (vhost: /, messages: 0) 2026-04-06 05:40:53.500120 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - worker_fanout_5c8e381e9869471f892d3586dfd420c6 (vhost: /, messages: 0) 2026-04-06 05:40:53.500437 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - worker_fanout_6c534516e00d43b397b901c97429e317 (vhost: /, messages: 0) 2026-04-06 05:40:53.500543 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - worker_fanout_975bcb0093644f58aed043de5f7d79cd (vhost: /, messages: 0) 2026-04-06 05:40:53.500627 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - worker_fanout_9d0bb9a36fae46939faca2ce4f5c677d (vhost: /, messages: 0) 2026-04-06 05:40:53.500915 | orchestrator | 2026-04-06 05:40:53 | INFO  |  - worker_fanout_f9759c7e8268430dafef6561ce71a1fe (vhost: /, messages: 0) 2026-04-06 05:40:53.769196 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges 2026-04-06 05:41:00.155659 | orchestrator | 2026-04-06 05:41:00 | ERROR  | Unable to get ansible vault password 2026-04-06 05:41:00.155852 | orchestrator | 2026-04-06 05:41:00 | ERROR  | Unable to get vault secret: 
[Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-06 05:41:00.155885 | orchestrator | 2026-04-06 05:41:00 | ERROR  | Dropping encrypted entries 2026-04-06 05:41:00.190257 | orchestrator | 2026-04-06 05:41:00 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 2026-04-06 05:41:00.216452 | orchestrator | 2026-04-06 05:41:00 | INFO  | Found 46 exchange(s) in vhost '/': 2026-04-06 05:41:00.216515 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - aodh (type: topic, transient) 2026-04-06 05:41:00.216530 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - barbican.workers_fanout (type: fanout, transient) 2026-04-06 05:41:00.216544 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - ceilometer (type: topic, transient) 2026-04-06 05:41:00.216556 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - central_fanout (type: fanout, transient) 2026-04-06 05:41:00.216642 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - cinder (type: topic, transient) 2026-04-06 05:41:00.216658 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - cinder-backup_fanout (type: fanout, transient) 2026-04-06 05:41:00.216678 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - cinder-scheduler_fanout (type: fanout, transient) 2026-04-06 05:41:00.216690 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout (type: fanout, transient) 2026-04-06 05:41:00.216901 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout (type: fanout, transient) 2026-04-06 05:41:00.217086 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes_fanout (type: fanout, transient) 2026-04-06 05:41:00.217288 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - cinder-volume_fanout (type: fanout, transient) 2026-04-06 05:41:00.217726 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - compute_fanout (type: fanout, transient) 2026-04-06 
05:41:00.217747 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - conductor_fanout (type: fanout, transient) 2026-04-06 05:41:00.217759 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - designate (type: topic, transient) 2026-04-06 05:41:00.217882 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - dns (type: topic, transient) 2026-04-06 05:41:00.218223 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - glance (type: topic, transient) 2026-04-06 05:41:00.218244 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - heat (type: topic, transient) 2026-04-06 05:41:00.218374 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - ironic (type: topic, transient) 2026-04-06 05:41:00.218727 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - keystone (type: topic, transient) 2026-04-06 05:41:00.218854 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - l3_agent_fanout (type: fanout, transient) 2026-04-06 05:41:00.219099 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - magnum (type: topic, transient) 2026-04-06 05:41:00.219361 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - magnum-conductor_fanout (type: fanout, transient) 2026-04-06 05:41:00.219746 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - manila-data_fanout (type: fanout, transient) 2026-04-06 05:41:00.219881 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - manila-scheduler_fanout (type: fanout, transient) 2026-04-06 05:41:00.220238 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - manila-share_fanout (type: fanout, transient) 2026-04-06 05:41:00.222183 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - neutron (type: topic, transient) 2026-04-06 05:41:00.222208 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - neutron-vo-Network-1.1_fanout (type: fanout, transient) 2026-04-06 05:41:00.222221 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - neutron-vo-Port-1.10_fanout (type: fanout, transient) 2026-04-06 05:41:00.222232 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - neutron-vo-SecurityGroup-1.6_fanout (type: fanout, 
transient) 2026-04-06 05:41:00.222260 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - neutron-vo-SecurityGroupRule-1.3_fanout (type: fanout, transient) 2026-04-06 05:41:00.222272 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - neutron-vo-Subnet-1.2_fanout (type: fanout, transient) 2026-04-06 05:41:00.222283 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - nova (type: topic, transient) 2026-04-06 05:41:00.222294 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - octavia (type: topic, transient) 2026-04-06 05:41:00.222306 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - octavia_provisioning_v2_fanout (type: fanout, transient) 2026-04-06 05:41:00.222317 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - openstack (type: topic, transient) 2026-04-06 05:41:00.222328 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - producer_fanout (type: fanout, transient) 2026-04-06 05:41:00.222339 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - q-agent-notifier-port-update_fanout (type: fanout, transient) 2026-04-06 05:41:00.222350 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - q-agent-notifier-security_group-update_fanout (type: fanout, transient) 2026-04-06 05:41:00.222361 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - q-plugin_fanout (type: fanout, transient) 2026-04-06 05:41:00.222372 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - q-reports-plugin_fanout (type: fanout, transient) 2026-04-06 05:41:00.222384 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - q-server-resource-versions_fanout (type: fanout, transient) 2026-04-06 05:41:00.222394 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - scheduler_fanout (type: fanout, transient) 2026-04-06 05:41:00.222405 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - swift (type: topic, transient) 2026-04-06 05:41:00.222416 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - trove (type: topic, transient) 2026-04-06 05:41:00.222428 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - worker_fanout (type: fanout, transient) 
2026-04-06 05:41:00.222439 | orchestrator | 2026-04-06 05:41:00 | INFO  |  - zaqar (type: topic, transient) 2026-04-06 05:41:00.481444 | orchestrator | + osism apply -a upgrade keystone 2026-04-06 05:41:01.761724 | orchestrator | 2026-04-06 05:41:01 | INFO  | Prepare task for execution of keystone. 2026-04-06 05:41:01.826305 | orchestrator | 2026-04-06 05:41:01 | INFO  | Task 871ed269-8758-4a83-b99c-e50a5d2cb3ea (keystone) was prepared for execution. 2026-04-06 05:41:01.826430 | orchestrator | 2026-04-06 05:41:01 | INFO  | It takes a moment until task 871ed269-8758-4a83-b99c-e50a5d2cb3ea (keystone) has been started and output is visible here. 2026-04-06 05:41:13.115818 | orchestrator | 2026-04-06 05:41:13.115956 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 05:41:13.115983 | orchestrator | 2026-04-06 05:41:13.116005 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 05:41:13.116025 | orchestrator | Monday 06 April 2026 05:41:06 +0000 (0:00:01.672) 0:00:01.673 ********** 2026-04-06 05:41:13.116045 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:41:13.116064 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:41:13.116082 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:41:13.116101 | orchestrator | 2026-04-06 05:41:13.116120 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 05:41:13.116186 | orchestrator | Monday 06 April 2026 05:41:08 +0000 (0:00:01.710) 0:00:03.383 ********** 2026-04-06 05:41:13.116205 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-06 05:41:13.116223 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-06 05:41:13.116243 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-06 05:41:13.116262 | orchestrator | 2026-04-06 05:41:13.116280 | orchestrator | PLAY [Apply role keystone] 
***************************************************** 2026-04-06 05:41:13.116298 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-06 05:41:13.116318 | orchestrator | (): ('Connection aborted.', RemoteDisconnected('Remote end closed 2026-04-06 05:41:13.116357 | orchestrator | connection without response')) 2026-04-06 05:41:13.116377 | orchestrator | 2026-04-06 05:41:13.116396 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-06 05:41:13.116410 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-06 05:41:13.116423 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-06 05:41:13.116449 | orchestrator | Monday 06 April 2026 05:41:09 +0000 (0:00:01.373) 0:00:04.757 ********** 2026-04-06 05:41:13.116462 | orchestrator | included: /ansible/roles/keystone/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 05:41:13.116476 | orchestrator | 2026-04-06 05:41:13.116488 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-04-06 05:41:13.116501 | orchestrator | Monday 06 April 2026 05:41:11 +0000 (0:00:01.126) 0:00:05.883 ********** 2026-04-06 05:41:13.116538 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-06 05:41:13.116559 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-06 05:41:13.116620 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-06 05:41:13.116637 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-06 05:41:13.116657 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-06 
05:41:13.116672 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-06 05:41:13.116685 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-06 05:41:13.116717 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-06 05:41:13.116748 | 
orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-06 05:41:19.247965 | orchestrator | 2026-04-06 05:41:19.248086 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-04-06 05:41:19.248105 | orchestrator | Monday 06 April 2026 05:41:13 +0000 (0:00:02.175) 0:00:08.058 ********** 2026-04-06 05:41:19.248186 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:41:19.248202 | orchestrator | 2026-04-06 05:41:19.248214 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-04-06 05:41:19.248225 | orchestrator | Monday 06 April 2026 05:41:13 +0000 (0:00:00.126) 0:00:08.185 ********** 2026-04-06 05:41:19.248237 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:41:19.248248 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:41:19.248259 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:41:19.248270 | orchestrator | 2026-04-06 05:41:19.248281 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-04-06 05:41:19.248293 | orchestrator | Monday 06 April 2026 05:41:13 +0000 (0:00:00.309) 0:00:08.494 ********** 2026-04-06 05:41:19.248304 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-06 05:41:19.248315 | orchestrator | 2026-04-06 05:41:19.248326 | orchestrator | TASK [keystone : include_tasks] 
************************************************ 2026-04-06 05:41:19.248337 | orchestrator | Monday 06 April 2026 05:41:14 +0000 (0:00:01.250) 0:00:09.745 ********** 2026-04-06 05:41:19.248348 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 05:41:19.248359 | orchestrator | 2026-04-06 05:41:19.248370 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-06 05:41:19.248381 | orchestrator | Monday 06 April 2026 05:41:16 +0000 (0:00:01.209) 0:00:10.954 ********** 2026-04-06 05:41:19.248416 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-06 05:41:19.248456 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-06 05:41:19.248492 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 
2026-04-06 05:41:19.248510 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-06 05:41:19.248529 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-06 05:41:19.248550 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-06 05:41:19.248563 | orchestrator | 
ok: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-06 05:41:19.248577 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-06 05:41:19.248591 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-06 05:41:19.248604 | orchestrator | 
2026-04-06 05:41:19.248627 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-06 05:41:20.598875 | orchestrator | Monday 06 April 2026 05:41:19 +0000 (0:00:03.130) 0:00:14.085 ********** 2026-04-06 05:41:20.598958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-06 05:41:20.599003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 05:41:20.599012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 05:41:20.599020 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:41:20.599028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-06 05:41:20.599048 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 05:41:20.599055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 05:41:20.599062 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:41:20.599071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-06 05:41:20.599084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 05:41:20.599090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 05:41:20.599097 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:41:20.599103 | orchestrator | 2026-04-06 05:41:20.599110 | orchestrator | TASK 
[service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-06 05:41:20.599141 | orchestrator | Monday 06 April 2026 05:41:20 +0000 (0:00:01.024) 0:00:15.109 ********** 2026-04-06 05:41:20.599154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-06 05:41:22.508440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 
05:41:22.508583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 05:41:22.508601 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:41:22.508617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-06 05:41:22.508631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-06 05:41:22.508643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-06 05:41:22.508655 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:41:22.508689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-06 05:41:22.508728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-06 05:41:22.508749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-06 05:41:22.508766 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:41:22.508784 | orchestrator |
2026-04-06 05:41:22.508807 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2026-04-06 05:41:22.508834 | orchestrator | Monday 06 April 2026 05:41:21 +0000 (0:00:00.903) 0:00:16.012 **********
2026-04-06 05:41:22.508853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-06 05:41:22.508889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-06 05:41:27.379717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-06 05:41:27.379824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-06 05:41:27.379841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-06 05:41:27.379854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-06 05:41:27.379866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-06 05:41:27.379919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-06 05:41:27.379939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-06 05:41:27.379951 | orchestrator |
2026-04-06 05:41:27.379965 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2026-04-06 05:41:27.379977 | orchestrator | Monday 06 April 2026 05:41:24 +0000 (0:00:03.330) 0:00:19.342 **********
2026-04-06 05:41:27.379990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-06 05:41:27.380002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-06 05:41:27.380015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-06 05:41:27.380043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-06 05:41:33.307269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-06 05:41:33.307385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-06 05:41:33.307403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-06 05:41:33.307416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-06 05:41:33.307453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-06 05:41:33.307466 | orchestrator |
2026-04-06 05:41:33.307480 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2026-04-06 05:41:33.307492 | orchestrator | Monday 06 April 2026 05:41:29 +0000 (0:00:05.389) 0:00:24.732 **********
2026-04-06 05:41:33.307504 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:41:33.307516 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:41:33.307527 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:41:33.307538 | orchestrator |
2026-04-06 05:41:33.307549 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-04-06 05:41:33.307560 | orchestrator | Monday 06 April 2026 05:41:31 +0000 (0:00:01.436) 0:00:26.168 **********
2026-04-06 05:41:33.307571 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:41:33.307600 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:41:33.307611 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:41:33.307623 | orchestrator |
2026-04-06 05:41:33.307634 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-04-06 05:41:33.307645 | orchestrator | Monday 06 April 2026 05:41:31 +0000 (0:00:00.568) 0:00:26.736 **********
2026-04-06 05:41:33.307655 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:41:33.307666 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:41:33.307677 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:41:33.307688 | orchestrator |
2026-04-06 05:41:33.307705 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-04-06 05:41:33.307716 | orchestrator | Monday 06 April 2026 05:41:32 +0000 (0:00:00.349) 0:00:27.085 **********
2026-04-06 05:41:33.307727 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:41:33.307740 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:41:33.307752 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:41:33.307765 | orchestrator |
2026-04-06 05:41:33.307778 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-04-06 05:41:33.307791 | orchestrator | Monday 06 April 2026 05:41:32 +0000 (0:00:00.520) 0:00:27.606 **********
2026-04-06 05:41:33.307806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-06 05:41:33.307820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-06 05:41:33.307842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-06 05:41:33.307856 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:41:33.307878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-06 05:41:49.366296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-06 05:41:49.366448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-06 05:41:49.366480 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:41:49.366506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-06 05:41:49.366564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-06 05:41:49.366587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-06 05:41:49.366608 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:41:49.366628 | orchestrator |
2026-04-06 05:41:49.366648 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-06 05:41:49.366669 | orchestrator | Monday 06 April 2026 05:41:33 +0000 (0:00:00.338) 0:00:28.284 **********
2026-04-06 05:41:49.366689 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:41:49.366709 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:41:49.366730 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:41:49.366749 | orchestrator |
2026-04-06 05:41:49.366770 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-04-06 05:41:49.366815 | orchestrator | Monday 06 April 2026 05:41:33 +0000 (0:00:00.338) 0:00:28.623 **********
2026-04-06 05:41:49.366836 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-06 05:41:49.366867 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-06 05:41:49.366888 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-06 05:41:49.366907 | orchestrator |
2026-04-06 05:41:49.366924 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-04-06 05:41:49.366937 | orchestrator | Monday 06 April 2026 05:41:35 +0000 (0:00:02.011) 0:00:30.634 **********
2026-04-06 05:41:49.366950 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 05:41:49.366963 | orchestrator |
2026-04-06 05:41:49.366975 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-04-06 05:41:49.366989 | orchestrator | Monday 06 April 2026 05:41:36 +0000 (0:00:00.922) 0:00:31.556 **********
2026-04-06 05:41:49.367001 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:41:49.367013 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:41:49.367025 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:41:49.367036 | orchestrator |
2026-04-06 05:41:49.367047 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-04-06 05:41:49.367070 | orchestrator | Monday 06 April 2026 05:41:37 +0000 (0:00:00.495) 0:00:32.052 **********
2026-04-06 05:41:49.367081 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-06 05:41:49.367124 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 05:41:49.367147 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-06 05:41:49.367161 | orchestrator |
2026-04-06 05:41:49.367172 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-04-06 05:41:49.367183 | orchestrator | Monday 06 April 2026 05:41:38 +0000 (0:00:00.927) 0:00:32.979 **********
2026-04-06 05:41:49.367195 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:41:49.367206 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:41:49.367217 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:41:49.367228 | orchestrator |
2026-04-06 05:41:49.367239 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-04-06 05:41:49.367250 | orchestrator | Monday 06 April 2026 05:41:38 +0000 (0:00:00.247) 0:00:33.226 **********
2026-04-06 05:41:49.367261 | orchestrator | ok: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-06 05:41:49.367273 | orchestrator | ok: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-06 05:41:49.367284 | orchestrator | ok: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-06 05:41:49.367295 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-06 05:41:49.367307 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-06 05:41:49.367319 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-06 05:41:49.367331 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-06 05:41:49.367349 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-06 05:41:49.367359 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-06 05:41:49.367369 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-06 05:41:49.367379 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-06 05:41:49.367389 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-06 05:41:49.367402 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-06 05:41:49.367418 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-06 05:41:49.367434 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-06 05:41:49.367450 | orchestrator | ok: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-06 05:41:49.367466 | orchestrator | ok: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-06 05:41:49.367476 | orchestrator | ok: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-06 05:41:49.367486 | orchestrator | ok: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-06 05:41:49.367496 | orchestrator | ok: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-06 05:41:49.367506 | orchestrator | ok: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-06 05:41:49.367516 | orchestrator |
2026-04-06 05:41:49.367526 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-04-06 05:41:49.367536 | orchestrator | Monday 06 April 2026 05:41:46 +0000 (0:00:08.571) 0:00:41.798 **********
2026-04-06 05:41:49.367546 | orchestrator | ok: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-06 05:41:49.367556 | orchestrator | ok: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-06 05:41:49.367578 | orchestrator | ok: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-06 05:41:49.367589 | orchestrator | ok: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-06 05:41:49.367608 | orchestrator | ok: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-06 05:41:53.958409 | orchestrator | ok: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-06 05:41:53.958513 | orchestrator |
2026-04-06 05:41:53.958545 | orchestrator | TASK [service-check-containers : keystone | Check containers] ******************
2026-04-06 05:41:53.958559 | orchestrator | Monday 06 April 2026 05:41:49 +0000 (0:00:02.940) 0:00:44.738 **********
2026-04-06 05:41:53.958576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-06 05:41:53.958593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-06 05:41:53.958607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-06 05:41:53.958657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-06 05:41:53.958676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-06 05:41:53.958688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-06 05:41:53.958700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value':
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-06 05:41:53.958713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-06 05:41:53.958724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-06 05:41:53.958736 | orchestrator | 2026-04-06 05:41:53.958747 | orchestrator | TASK 
[service-check-containers : keystone | Notify handlers to restart containers] *** 2026-04-06 05:41:53.958764 | orchestrator | Monday 06 April 2026 05:41:53 +0000 (0:00:03.203) 0:00:47.941 ********** 2026-04-06 05:41:53.958776 | orchestrator | changed: [testbed-node-0] => { 2026-04-06 05:41:53.958787 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 05:41:53.958799 | orchestrator | } 2026-04-06 05:41:53.958810 | orchestrator | changed: [testbed-node-1] => { 2026-04-06 05:41:53.958821 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 05:41:53.958832 | orchestrator | } 2026-04-06 05:41:53.958842 | orchestrator | changed: [testbed-node-2] => { 2026-04-06 05:41:53.958853 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 05:41:53.958864 | orchestrator | } 2026-04-06 05:41:53.958875 | orchestrator | 2026-04-06 05:41:53.958886 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-06 05:41:53.958898 | orchestrator | Monday 06 April 2026 05:41:53 +0000 (0:00:00.559) 0:00:48.501 ********** 2026-04-06 05:41:53.958923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-06 05:43:55.926487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 05:43:55.926606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 05:43:55.926624 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:43:55.926641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-06 05:43:55.926681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 05:43:55.926708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 05:43:55.926720 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:43:55.926751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-06 05:43:55.926765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-06 05:43:55.926776 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-06 05:43:55.926796 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:43:55.926808 | orchestrator | 2026-04-06 05:43:55.926820 | orchestrator | TASK [keystone : Enable log_bin_trust_function_creators function] ************** 2026-04-06 05:43:55.926832 | orchestrator | Monday 06 April 2026 05:41:54 +0000 (0:00:01.258) 0:00:49.759 ********** 2026-04-06 05:43:55.926844 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:43:55.926855 | orchestrator | 2026-04-06 05:43:55.926867 | orchestrator | TASK [keystone : Init keystone database upgrade] ******************************* 2026-04-06 05:43:55.926877 | orchestrator | Monday 06 April 2026 05:41:57 +0000 (0:00:02.223) 0:00:51.983 ********** 2026-04-06 05:43:55.926888 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:43:55.926899 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:43:55.926910 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:43:55.926921 | orchestrator | 2026-04-06 05:43:55.926931 | orchestrator | TASK [keystone : Finish keystone database upgrade] ***************************** 2026-04-06 05:43:55.926942 | orchestrator | Monday 06 April 2026 05:41:57 +0000 (0:00:00.465) 0:00:52.448 ********** 2026-04-06 05:43:55.926953 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:43:55.926964 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:43:55.926975 | 
orchestrator | changed: [testbed-node-2] 2026-04-06 05:43:55.926985 | orchestrator | 2026-04-06 05:43:55.926998 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-06 05:43:55.927010 | orchestrator | Monday 06 April 2026 05:41:58 +0000 (0:00:00.846) 0:00:53.295 ********** 2026-04-06 05:43:55.927071 | orchestrator | 2026-04-06 05:43:55.927085 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-06 05:43:55.927097 | orchestrator | Monday 06 April 2026 05:41:58 +0000 (0:00:00.074) 0:00:53.369 ********** 2026-04-06 05:43:55.927110 | orchestrator | 2026-04-06 05:43:55.927122 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-06 05:43:55.927134 | orchestrator | Monday 06 April 2026 05:41:58 +0000 (0:00:00.074) 0:00:53.444 ********** 2026-04-06 05:43:55.927148 | orchestrator | 2026-04-06 05:43:55.927160 | orchestrator | RUNNING HANDLER [keystone : Init keystone database upgrade] ******************** 2026-04-06 05:43:55.927179 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-06 05:43:55.927193 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-06 05:43:55.927218 | orchestrator | Monday 06 April 2026 05:41:58 +0000 (0:00:00.076) 0:00:53.520 ********** 2026-04-06 05:43:55.927231 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:43:55.927243 | orchestrator | 2026-04-06 05:43:55.927256 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-04-06 05:43:55.927269 | orchestrator | Monday 06 April 2026 05:43:02 +0000 (0:01:03.794) 0:01:57.315 ********** 2026-04-06 05:43:55.927281 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:43:55.927295 | orchestrator | changed: [testbed-node-2] 2026-04-06 05:43:55.927308 | orchestrator | changed: [testbed-node-1] 2026-04-06 05:43:55.927320 | 
orchestrator | 2026-04-06 05:43:55.927334 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-04-06 05:43:55.927354 | orchestrator | Monday 06 April 2026 05:43:55 +0000 (0:00:53.444) 0:02:50.759 ********** 2026-04-06 05:44:36.060532 | orchestrator | changed: [testbed-node-2] 2026-04-06 05:44:36.060649 | orchestrator | changed: [testbed-node-1] 2026-04-06 05:44:36.060665 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:44:36.060678 | orchestrator | 2026-04-06 05:44:36.060691 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-04-06 05:44:36.060703 | orchestrator | Monday 06 April 2026 05:44:07 +0000 (0:00:11.851) 0:03:02.611 ********** 2026-04-06 05:44:36.060742 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:44:36.060754 | orchestrator | changed: [testbed-node-2] 2026-04-06 05:44:36.060765 | orchestrator | changed: [testbed-node-1] 2026-04-06 05:44:36.060776 | orchestrator | 2026-04-06 05:44:36.060787 | orchestrator | RUNNING HANDLER [keystone : Finish keystone database upgrade] ****************** 2026-04-06 05:44:36.060798 | orchestrator | Monday 06 April 2026 05:44:20 +0000 (0:00:12.895) 0:03:15.506 ********** 2026-04-06 05:44:36.060809 | orchestrator | changed: [testbed-node-2] 2026-04-06 05:44:36.060820 | orchestrator | 2026-04-06 05:44:36.060831 | orchestrator | TASK [keystone : Disable log_bin_trust_function_creators function] ************* 2026-04-06 05:44:36.060842 | orchestrator | Monday 06 April 2026 05:44:32 +0000 (0:00:12.144) 0:03:27.651 ********** 2026-04-06 05:44:36.060853 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:44:36.060864 | orchestrator | 2026-04-06 05:44:36.060875 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 05:44:36.060887 | orchestrator | testbed-node-0 : ok=25  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-06 
05:44:36.060900 | orchestrator | testbed-node-1 : ok=19  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-06 05:44:36.060911 | orchestrator | testbed-node-2 : ok=21  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-06 05:44:36.060922 | orchestrator | 2026-04-06 05:44:36.060933 | orchestrator | 2026-04-06 05:44:36.060944 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 05:44:36.060955 | orchestrator | Monday 06 April 2026 05:44:35 +0000 (0:00:02.858) 0:03:30.509 ********** 2026-04-06 05:44:36.060966 | orchestrator | =============================================================================== 2026-04-06 05:44:36.060977 | orchestrator | keystone : Init keystone database upgrade ------------------------------ 63.80s 2026-04-06 05:44:36.060988 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 53.44s 2026-04-06 05:44:36.061021 | orchestrator | keystone : Restart keystone container ---------------------------------- 12.90s 2026-04-06 05:44:36.061033 | orchestrator | keystone : Finish keystone database upgrade ---------------------------- 12.14s 2026-04-06 05:44:36.061044 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 11.85s 2026-04-06 05:44:36.061055 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.57s 2026-04-06 05:44:36.061066 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.39s 2026-04-06 05:44:36.061077 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.33s 2026-04-06 05:44:36.061090 | orchestrator | service-check-containers : keystone | Check containers ------------------ 3.20s 2026-04-06 05:44:36.061104 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.13s 2026-04-06 05:44:36.061117 | orchestrator | 
keystone : Copying files for keystone-ssh ------------------------------- 2.94s 2026-04-06 05:44:36.061130 | orchestrator | keystone : Disable log_bin_trust_function_creators function ------------- 2.86s 2026-04-06 05:44:36.061143 | orchestrator | keystone : Enable log_bin_trust_function_creators function -------------- 2.22s 2026-04-06 05:44:36.061156 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.18s 2026-04-06 05:44:36.061169 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.01s 2026-04-06 05:44:36.061182 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.71s 2026-04-06 05:44:36.061195 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.44s 2026-04-06 05:44:36.061209 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.37s 2026-04-06 05:44:36.061222 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.26s 2026-04-06 05:44:36.061235 | orchestrator | keystone : Check if Keystone domain-specific config is supplied --------- 1.25s 2026-04-06 05:44:36.242273 | orchestrator | + osism apply -a upgrade placement 2026-04-06 05:44:37.516979 | orchestrator | 2026-04-06 05:44:37 | INFO  | Prepare task for execution of placement. 2026-04-06 05:44:37.580134 | orchestrator | 2026-04-06 05:44:37 | INFO  | Task 7e8179b6-d353-4401-a956-7db7f379d8b6 (placement) was prepared for execution. 2026-04-06 05:44:37.580227 | orchestrator | 2026-04-06 05:44:37 | INFO  | It takes a moment until task 7e8179b6-d353-4401-a956-7db7f379d8b6 (placement) has been started and output is visible here. 
2026-04-06 05:45:31.563217 | orchestrator | 2026-04-06 05:45:31.563324 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 05:45:31.563339 | orchestrator | 2026-04-06 05:45:31.563350 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 05:45:31.563361 | orchestrator | Monday 06 April 2026 05:44:42 +0000 (0:00:01.724) 0:00:01.724 ********** 2026-04-06 05:45:31.563370 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:45:31.563381 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:45:31.563391 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:45:31.563401 | orchestrator | 2026-04-06 05:45:31.563411 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 05:45:31.563420 | orchestrator | Monday 06 April 2026 05:44:44 +0000 (0:00:01.818) 0:00:03.543 ********** 2026-04-06 05:45:31.563431 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-04-06 05:45:31.563440 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-04-06 05:45:31.563450 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-04-06 05:45:31.563460 | orchestrator | 2026-04-06 05:45:31.563469 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-04-06 05:45:31.563479 | orchestrator | 2026-04-06 05:45:31.563489 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-06 05:45:31.563498 | orchestrator | Monday 06 April 2026 05:44:45 +0000 (0:00:01.524) 0:00:05.068 ********** 2026-04-06 05:45:31.563508 | orchestrator | included: /ansible/roles/placement/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 05:45:31.563518 | orchestrator | 2026-04-06 05:45:31.563528 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************ 
2026-04-06 05:45:31.563538 | orchestrator | Monday 06 April 2026 05:44:48 +0000 (0:00:02.303) 0:00:07.371 ********** 2026-04-06 05:45:31.563547 | orchestrator | ok: [testbed-node-0] => (item=placement (placement)) 2026-04-06 05:45:31.563557 | orchestrator | 2026-04-06 05:45:31.563566 | orchestrator | TASK [service-ks-register : placement | Creating/deleting endpoints] *********** 2026-04-06 05:45:31.563576 | orchestrator | Monday 06 April 2026 05:44:53 +0000 (0:00:05.219) 0:00:12.591 ********** 2026-04-06 05:45:31.563587 | orchestrator | ok: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-04-06 05:45:31.563597 | orchestrator | ok: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-04-06 05:45:31.563607 | orchestrator | 2026-04-06 05:45:31.563617 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-04-06 05:45:31.563626 | orchestrator | Monday 06 April 2026 05:45:01 +0000 (0:00:07.988) 0:00:20.580 ********** 2026-04-06 05:45:31.563636 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-06 05:45:31.563646 | orchestrator | 2026-04-06 05:45:31.563655 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-04-06 05:45:31.563665 | orchestrator | Monday 06 April 2026 05:45:06 +0000 (0:00:04.545) 0:00:25.125 ********** 2026-04-06 05:45:31.563675 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-04-06 05:45:31.563684 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-06 05:45:31.563694 | orchestrator | 2026-04-06 05:45:31.563704 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-04-06 05:45:31.563713 | orchestrator | Monday 06 April 2026 05:45:12 +0000 (0:00:06.677) 0:00:31.803 ********** 2026-04-06 05:45:31.563746 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-04-06 05:45:31.563758 | orchestrator | 2026-04-06 05:45:31.563769 | orchestrator | TASK [service-ks-register : placement | Granting/revoking user roles] ********** 2026-04-06 05:45:31.563780 | orchestrator | Monday 06 April 2026 05:45:16 +0000 (0:00:04.249) 0:00:36.052 ********** 2026-04-06 05:45:31.563792 | orchestrator | ok: [testbed-node-0] => (item=placement -> service -> admin) 2026-04-06 05:45:31.563803 | orchestrator | 2026-04-06 05:45:31.563814 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-06 05:45:31.563825 | orchestrator | Monday 06 April 2026 05:45:21 +0000 (0:00:05.041) 0:00:41.094 ********** 2026-04-06 05:45:31.563836 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:45:31.563847 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:45:31.563858 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:45:31.563870 | orchestrator | 2026-04-06 05:45:31.563881 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-04-06 05:45:31.563892 | orchestrator | Monday 06 April 2026 05:45:23 +0000 (0:00:01.744) 0:00:42.838 ********** 2026-04-06 05:45:31.563940 | orchestrator | ok: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:31.563957 | orchestrator | ok: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:31.563991 | orchestrator | ok: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:31.564011 | orchestrator |
2026-04-06 05:45:31.564021 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2026-04-06 05:45:31.564032 | orchestrator | Monday 06 April 2026 05:45:26 +0000 (0:00:02.267) 0:00:45.105 **********
2026-04-06 05:45:31.564041 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:45:31.564051 | orchestrator |
2026-04-06 05:45:31.564060 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2026-04-06 05:45:31.564070 | orchestrator | Monday 06 April 2026 05:45:27 +0000 (0:00:01.126) 0:00:46.232 **********
2026-04-06 05:45:31.564079 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:45:31.564089 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:45:31.564098 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:45:31.564108 | orchestrator |
2026-04-06 05:45:31.564117 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-06 05:45:31.564127 | orchestrator | Monday 06 April 2026 05:45:28 +0000 (0:00:01.317) 0:00:47.550 **********
2026-04-06 05:45:31.564136 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 05:45:31.564146 | orchestrator |
2026-04-06 05:45:31.564156 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2026-04-06 05:45:31.564165 | orchestrator | Monday 06 April 2026 05:45:30 +0000 (0:00:01.916) 0:00:49.466 **********
2026-04-06 05:45:31.564186 | orchestrator | ok: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:34.894401 | orchestrator | ok: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:34.894525 | orchestrator | ok: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:34.894569 | orchestrator |
2026-04-06 05:45:34.894584 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2026-04-06 05:45:34.894596 | orchestrator | Monday 06 April 2026 05:45:32 +0000 (0:00:02.309) 0:00:51.775 **********
2026-04-06 05:45:34.894610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:34.894622 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:45:34.894669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:34.894683 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:45:34.894695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:34.894715 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:45:34.894726 | orchestrator |
2026-04-06 05:45:34.894737 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2026-04-06 05:45:34.894748 | orchestrator | Monday 06 April 2026 05:45:34 +0000 (0:00:01.775) 0:00:53.551 **********
2026-04-06 05:45:34.894760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:34.894772 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:45:34.894788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:34.894801 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:45:34.894821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:50.075599 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:45:50.075686 | orchestrator |
2026-04-06 05:45:50.075696 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2026-04-06 05:45:50.075705 | orchestrator | Monday 06 April 2026 05:45:35 +0000 (0:00:01.503) 0:00:55.055 **********
2026-04-06 05:45:50.075715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:50.075725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:50.075747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:50.075755 | orchestrator |
2026-04-06 05:45:50.075762 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2026-04-06 05:45:50.075769 | orchestrator | Monday 06 April 2026 05:45:38 +0000 (0:00:02.483) 0:00:57.539 **********
2026-04-06 05:45:50.075787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:50.075813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:50.075821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:50.075828 | orchestrator |
2026-04-06 05:45:50.075835 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2026-04-06 05:45:50.075842 | orchestrator | Monday 06 April 2026 05:45:42 +0000 (0:00:03.810) 0:01:01.350 **********
2026-04-06 05:45:50.075849 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-04-06 05:45:50.075860 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:45:50.075867 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-04-06 05:45:50.075874 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:45:50.075881 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-04-06 05:45:50.075887 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:45:50.075893 | orchestrator |
2026-04-06 05:45:50.075899 | orchestrator | TASK [Configure uWSGI for Placement] *******************************************
2026-04-06 05:45:50.075910 | orchestrator | Monday 06 April 2026 05:45:43 +0000 (0:00:01.574) 0:01:02.925 **********
2026-04-06 05:45:50.075916 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 05:45:50.075924 | orchestrator |
2026-04-06 05:45:50.075931 | orchestrator | TASK [service-uwsgi-config : Copying over placement-api uWSGI config] **********
2026-04-06 05:45:50.075937 | orchestrator | Monday 06 April 2026 05:45:45 +0000 (0:00:02.012) 0:01:04.938 **********
2026-04-06 05:45:50.075944 | orchestrator | changed: [testbed-node-0]
2026-04-06 05:45:50.075951 | orchestrator | changed: [testbed-node-1]
2026-04-06 05:45:50.075958 | orchestrator | changed: [testbed-node-2]
2026-04-06 05:45:50.076015 | orchestrator |
2026-04-06 05:45:50.076023 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2026-04-06 05:45:50.076029 | orchestrator | Monday 06 April 2026 05:45:48 +0000 (0:00:02.847) 0:01:07.785 **********
2026-04-06 05:45:50.076036 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:45:50.076044 | orchestrator | ok: [testbed-node-1]
2026-04-06 05:45:50.076051 | orchestrator | ok: [testbed-node-2]
2026-04-06 05:45:50.076058 | orchestrator |
2026-04-06 05:45:50.076068 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2026-04-06 05:45:57.187865 | orchestrator | Monday 06 April 2026 05:45:51 +0000 (0:00:02.419) 0:01:10.205 **********
2026-04-06 05:45:57.188022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:57.188045 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:45:57.188060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:57.188072 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:45:57.188101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:57.188135 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:45:57.188147 | orchestrator |
2026-04-06 05:45:57.188159 | orchestrator | TASK [service-check-containers : placement | Check containers] *****************
2026-04-06 05:45:57.188170 | orchestrator | Monday 06 April 2026 05:45:53 +0000 (0:00:02.151) 0:01:12.357 **********
2026-04-06 05:45:57.188201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:57.188214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:57.188228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:45:57.188248 | orchestrator |
2026-04-06 05:45:57.188260 | orchestrator | TASK [service-check-containers : placement | Notify handlers to restart containers] ***
2026-04-06 05:45:57.188272 | orchestrator | Monday 06 April 2026 05:45:55 +0000 (0:00:02.350) 0:01:14.707 **********
2026-04-06 05:45:57.188283 | orchestrator | changed: [testbed-node-0] => {
2026-04-06 05:45:57.188294 | orchestrator |     "msg": "Notifying handlers"
2026-04-06 05:45:57.188311 | orchestrator | }
2026-04-06 05:45:57.188323 | orchestrator | changed: [testbed-node-1] => {
2026-04-06 05:45:57.188334 | orchestrator |     "msg": "Notifying handlers"
2026-04-06 05:45:57.188345 | orchestrator | }
2026-04-06 05:45:57.188356 | orchestrator | changed: [testbed-node-2] => {
2026-04-06 05:45:57.188366 | orchestrator |     "msg": "Notifying handlers"
2026-04-06 05:45:57.188377 | orchestrator | }
2026-04-06 05:45:57.188389 | orchestrator |
2026-04-06 05:45:57.188400 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-06 05:45:57.188413 | orchestrator | Monday 06 April 2026 05:45:56 +0000 (0:00:01.345) 0:01:16.053 **********
2026-04-06 05:45:57.188435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:46:47.226605 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:46:47.226691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:46:47.226701 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:46:47.226708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 05:46:47.226732 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:46:47.226738 | orchestrator |
2026-04-06 05:46:47.226744 | orchestrator | TASK [placement : Creating placement databases] ********************************
2026-04-06 05:46:47.226750 | orchestrator | Monday 06 April 2026 05:45:59 +0000 (0:00:02.107) 0:01:18.161 **********
2026-04-06 05:46:47.226756 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:46:47.226761 | orchestrator |
2026-04-06 05:46:47.226766 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2026-04-06 05:46:47.226772 | orchestrator | Monday 06 April 2026 05:46:02 +0000 (0:00:03.079) 0:01:21.241 **********
2026-04-06 05:46:47.226777 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:46:47.226782 | orchestrator |
2026-04-06 05:46:47.226797 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2026-04-06 05:46:47.226802 | orchestrator | Monday 06 April 2026 05:46:05 +0000 (0:00:03.518) 0:01:24.759 **********
2026-04-06 05:46:47.226808 | orchestrator | changed: [testbed-node-0]
2026-04-06 05:46:47.226813 | orchestrator |
2026-04-06 05:46:47.226818 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-06 05:46:47.226823 | orchestrator | Monday 06 April 2026 05:46:21 +0000 (0:00:15.535) 0:01:40.295 **********
2026-04-06 05:46:47.226828 | orchestrator |
2026-04-06 05:46:47.226833 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-06 05:46:47.226838 | orchestrator | Monday 06 April 2026 05:46:21 +0000 (0:00:00.448) 0:01:40.743 **********
2026-04-06 05:46:47.226843 | orchestrator |
2026-04-06 05:46:47.226848 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-06 05:46:47.226853 | orchestrator | Monday 06 April 2026 05:46:22 +0000 (0:00:00.442) 0:01:41.186 **********
2026-04-06 05:46:47.226858 | orchestrator |
2026-04-06 05:46:47.226864 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-04-06 05:46:47.226869 | orchestrator | Monday 06 April 2026 05:46:22 +0000 (0:00:00.835) 0:01:42.021 **********
2026-04-06 05:46:47.226874 | orchestrator | changed: [testbed-node-0]
2026-04-06 05:46:47.226879 | orchestrator | changed: [testbed-node-2]
2026-04-06 05:46:47.226884 | orchestrator | changed: [testbed-node-1]
2026-04-06 05:46:47.226889 | orchestrator |
2026-04-06 05:46:47.226895 | orchestrator | TASK [placement : Perform Placement online data migration] *********************
2026-04-06 05:46:47.226900 | orchestrator | Monday 06 April 2026 05:46:35 +0000 (0:00:12.468) 0:01:54.489 **********
2026-04-06 05:46:47.226905 | orchestrator | changed: [testbed-node-0]
2026-04-06 05:46:47.226910 | orchestrator |
2026-04-06 05:46:47.226915 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 05:46:47.226921 | orchestrator | testbed-node-0 : ok=24  changed=9  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-06 05:46:47.226938 | orchestrator | testbed-node-1 : ok=14  changed=6  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-06 05:46:47.226943 | orchestrator | testbed-node-2 : ok=14  changed=6  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-06 05:46:47.227016 | orchestrator |
2026-04-06 05:46:47.227025 | orchestrator |
2026-04-06 05:46:47.227034 | orchestrator
| TASKS RECAP ******************************************************************** 2026-04-06 05:46:47.227043 | orchestrator | Monday 06 April 2026 05:46:46 +0000 (0:00:11.529) 0:02:06.019 ********** 2026-04-06 05:46:47.227058 | orchestrator | =============================================================================== 2026-04-06 05:46:47.227067 | orchestrator | placement : Running placement bootstrap container ---------------------- 15.53s 2026-04-06 05:46:47.227075 | orchestrator | placement : Restart placement-api container ---------------------------- 12.47s 2026-04-06 05:46:47.227084 | orchestrator | placement : Perform Placement online data migration -------------------- 11.53s 2026-04-06 05:46:47.227093 | orchestrator | service-ks-register : placement | Creating/deleting endpoints ----------- 7.99s 2026-04-06 05:46:47.227101 | orchestrator | service-ks-register : placement | Creating users ------------------------ 6.68s 2026-04-06 05:46:47.227106 | orchestrator | service-ks-register : placement | Creating/deleting services ------------ 5.22s 2026-04-06 05:46:47.227111 | orchestrator | service-ks-register : placement | Granting/revoking user roles ---------- 5.04s 2026-04-06 05:46:47.227116 | orchestrator | service-ks-register : placement | Creating projects --------------------- 4.55s 2026-04-06 05:46:47.227121 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 4.25s 2026-04-06 05:46:47.227126 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.81s 2026-04-06 05:46:47.227131 | orchestrator | placement : Creating placement databases user and setting permissions --- 3.52s 2026-04-06 05:46:47.227137 | orchestrator | placement : Creating placement databases -------------------------------- 3.08s 2026-04-06 05:46:47.227146 | orchestrator | service-uwsgi-config : Copying over placement-api uWSGI config ---------- 2.85s 2026-04-06 05:46:47.227154 | orchestrator | placement : Copying 
over config.json files for services ----------------- 2.48s 2026-04-06 05:46:47.227163 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.42s 2026-04-06 05:46:47.227172 | orchestrator | service-check-containers : placement | Check containers ----------------- 2.35s 2026-04-06 05:46:47.227181 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.31s 2026-04-06 05:46:47.227187 | orchestrator | placement : include_tasks ----------------------------------------------- 2.30s 2026-04-06 05:46:47.227194 | orchestrator | placement : Ensuring config directories exist --------------------------- 2.27s 2026-04-06 05:46:47.227200 | orchestrator | placement : Copying over existing policy file --------------------------- 2.15s 2026-04-06 05:46:47.427368 | orchestrator | + osism apply -a upgrade neutron 2026-04-06 05:46:48.714382 | orchestrator | 2026-04-06 05:46:48 | INFO  | Prepare task for execution of neutron. 2026-04-06 05:46:48.783519 | orchestrator | 2026-04-06 05:46:48 | INFO  | Task 8ce769d1-6d22-4d8b-8d09-b600cc2df65f (neutron) was prepared for execution. 2026-04-06 05:46:48.783588 | orchestrator | 2026-04-06 05:46:48 | INFO  | It takes a moment until task 8ce769d1-6d22-4d8b-8d09-b600cc2df65f (neutron) has been started and output is visible here. 
2026-04-06 05:47:26.049573 | orchestrator | 2026-04-06 05:47:26.049687 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 05:47:26.049703 | orchestrator | 2026-04-06 05:47:26.049728 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 05:47:26.049739 | orchestrator | Monday 06 April 2026 05:46:54 +0000 (0:00:01.823) 0:00:01.823 ********** 2026-04-06 05:47:26.049749 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:47:26.049760 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:47:26.049770 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:47:26.049780 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:47:26.049789 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:47:26.049799 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:47:26.049809 | orchestrator | 2026-04-06 05:47:26.049819 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 05:47:26.049829 | orchestrator | Monday 06 April 2026 05:46:56 +0000 (0:00:02.526) 0:00:04.349 ********** 2026-04-06 05:47:26.049839 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-04-06 05:47:26.049849 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-04-06 05:47:26.049859 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-04-06 05:47:26.049892 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-04-06 05:47:26.049903 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-04-06 05:47:26.049912 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-04-06 05:47:26.049922 | orchestrator | 2026-04-06 05:47:26.049931 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-04-06 05:47:26.049941 | orchestrator | 2026-04-06 05:47:26.050007 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-04-06 05:47:26.050072 | orchestrator | Monday 06 April 2026 05:46:58 +0000 (0:00:02.309) 0:00:06.659 ********** 2026-04-06 05:47:26.050084 | orchestrator | included: /ansible/roles/neutron/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 05:47:26.050095 | orchestrator | 2026-04-06 05:47:26.050104 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-04-06 05:47:26.050114 | orchestrator | Monday 06 April 2026 05:47:02 +0000 (0:00:04.001) 0:00:10.661 ********** 2026-04-06 05:47:26.050126 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:47:26.050138 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:47:26.050149 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:47:26.050161 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:47:26.050173 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:47:26.050184 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:47:26.050196 | orchestrator | 2026-04-06 05:47:26.050207 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-04-06 05:47:26.050218 | orchestrator | Monday 06 April 2026 05:47:05 +0000 (0:00:02.998) 0:00:13.659 ********** 2026-04-06 05:47:26.050229 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:47:26.050241 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:47:26.050252 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:47:26.050263 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:47:26.050275 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:47:26.050287 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:47:26.050298 | orchestrator | 2026-04-06 05:47:26.050332 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-04-06 05:47:26.050344 | orchestrator | Monday 06 April 2026 05:47:08 +0000 (0:00:02.361) 0:00:16.021 ********** 
2026-04-06 05:47:26.050356 | orchestrator | ok: [testbed-node-0] => { 2026-04-06 05:47:26.050368 | orchestrator |  "changed": false, 2026-04-06 05:47:26.050380 | orchestrator |  "msg": "All assertions passed" 2026-04-06 05:47:26.050391 | orchestrator | } 2026-04-06 05:47:26.050403 | orchestrator | ok: [testbed-node-1] => { 2026-04-06 05:47:26.050415 | orchestrator |  "changed": false, 2026-04-06 05:47:26.050426 | orchestrator |  "msg": "All assertions passed" 2026-04-06 05:47:26.050438 | orchestrator | } 2026-04-06 05:47:26.050449 | orchestrator | ok: [testbed-node-2] => { 2026-04-06 05:47:26.050466 | orchestrator |  "changed": false, 2026-04-06 05:47:26.050482 | orchestrator |  "msg": "All assertions passed" 2026-04-06 05:47:26.050499 | orchestrator | } 2026-04-06 05:47:26.050518 | orchestrator | ok: [testbed-node-3] => { 2026-04-06 05:47:26.050542 | orchestrator |  "changed": false, 2026-04-06 05:47:26.050558 | orchestrator |  "msg": "All assertions passed" 2026-04-06 05:47:26.050573 | orchestrator | } 2026-04-06 05:47:26.050588 | orchestrator | ok: [testbed-node-4] => { 2026-04-06 05:47:26.050603 | orchestrator |  "changed": false, 2026-04-06 05:47:26.050638 | orchestrator |  "msg": "All assertions passed" 2026-04-06 05:47:26.050654 | orchestrator | } 2026-04-06 05:47:26.050669 | orchestrator | ok: [testbed-node-5] => { 2026-04-06 05:47:26.050686 | orchestrator |  "changed": false, 2026-04-06 05:47:26.050702 | orchestrator |  "msg": "All assertions passed" 2026-04-06 05:47:26.050717 | orchestrator | } 2026-04-06 05:47:26.050732 | orchestrator | 2026-04-06 05:47:26.050747 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-04-06 05:47:26.050764 | orchestrator | Monday 06 April 2026 05:47:09 +0000 (0:00:01.723) 0:00:17.744 ********** 2026-04-06 05:47:26.050796 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:47:26.050812 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:47:26.050827 | orchestrator 
| skipping: [testbed-node-2] 2026-04-06 05:47:26.050842 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:47:26.050857 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:47:26.050873 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:47:26.050889 | orchestrator | 2026-04-06 05:47:26.050905 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-06 05:47:26.050920 | orchestrator | Monday 06 April 2026 05:47:12 +0000 (0:00:02.205) 0:00:19.950 ********** 2026-04-06 05:47:26.050939 | orchestrator | included: /ansible/roles/neutron/tasks/rolling_upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 05:47:26.050989 | orchestrator | 2026-04-06 05:47:26.051007 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-04-06 05:47:26.051025 | orchestrator | Monday 06 April 2026 05:47:14 +0000 (0:00:02.424) 0:00:22.375 ********** 2026-04-06 05:47:26.051042 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:47:26.051059 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:47:26.051076 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:47:26.051092 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:47:26.051131 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:47:26.051149 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:47:26.051166 | orchestrator | 2026-04-06 05:47:26.051195 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-04-06 05:47:26.051213 | orchestrator | Monday 06 April 2026 05:47:18 +0000 (0:00:03.604) 0:00:25.979 ********** 2026-04-06 05:47:26.051230 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:47:26.051247 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:47:26.051265 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:47:26.051284 | orchestrator | ok: [testbed-node-3] 2026-04-06 
05:47:26.051301 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:47:26.051318 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:47:26.051337 | orchestrator | 2026-04-06 05:47:26.051354 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-06 05:47:26.051394 | orchestrator | Monday 06 April 2026 05:47:20 +0000 (0:00:02.095) 0:00:28.075 ********** 2026-04-06 05:47:26.051416 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:47:26.051434 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:47:26.051452 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:47:26.051471 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:47:26.051489 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:47:26.051527 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:47:26.051545 | orchestrator | 2026-04-06 05:47:26.051561 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-04-06 05:47:26.051579 | orchestrator | Monday 06 April 2026 05:47:23 +0000 (0:00:03.678) 0:00:31.754 ********** 2026-04-06 05:47:26.051605 | orchestrator | ok: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:47:26.051631 | orchestrator | ok: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:47:26.051667 | orchestrator | ok: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:47:26.051708 | orchestrator | ok: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-06 05:47:37.904855 | orchestrator | ok: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-06 05:47:37.905026 | orchestrator | ok: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-06 05:47:37.905071 | orchestrator | 2026-04-06 05:47:37.905086 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-04-06 05:47:37.905099 | orchestrator | Monday 06 April 2026 05:47:27 +0000 (0:00:03.406) 0:00:35.160 ********** 2026-04-06 05:47:37.905110 | orchestrator | [WARNING]: Skipped 2026-04-06 05:47:37.905122 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-04-06 05:47:37.905134 | orchestrator | due to this access issue: 2026-04-06 05:47:37.905146 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-04-06 05:47:37.905157 | orchestrator | a directory 2026-04-06 05:47:37.905168 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-06 05:47:37.905180 | orchestrator | 2026-04-06 05:47:37.905191 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-06 05:47:37.905202 | orchestrator | Monday 06 April 2026 05:47:29 +0000 (0:00:02.259) 0:00:37.419 ********** 2026-04-06 05:47:37.905214 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 
05:47:37.905227 | orchestrator | 2026-04-06 05:47:37.905238 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-04-06 05:47:37.905249 | orchestrator | Monday 06 April 2026 05:47:32 +0000 (0:00:02.772) 0:00:40.192 ********** 2026-04-06 05:47:37.905263 | orchestrator | ok: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:47:37.905312 | orchestrator | ok: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:47:37.905327 | orchestrator | ok: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:47:37.905349 | orchestrator | ok: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-06 05:47:37.905362 | orchestrator | ok: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-06 05:47:37.905378 | orchestrator | ok: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-06 05:47:37.905393 | orchestrator | 2026-04-06 05:47:37.905406 | orchestrator | TASK [service-cert-copy : neutron | Copying 
over backend internal TLS certificate] *** 2026-04-06 05:47:37.905419 | orchestrator | Monday 06 April 2026 05:47:36 +0000 (0:00:04.060) 0:00:44.253 ********** 2026-04-06 05:47:37.905441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:47:42.170727 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:47:42.170862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:47:42.170895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:47:42.170917 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:47:42.170929 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:47:42.171014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:47:42.171029 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:47:42.171041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:47:42.171077 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:47:42.171109 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:47:42.171121 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:47:42.171132 | orchestrator | 2026-04-06 05:47:42.171144 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-04-06 05:47:42.171156 | orchestrator | Monday 06 April 2026 05:47:39 +0000 (0:00:03.515) 0:00:47.768 ********** 2026-04-06 05:47:42.171168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:47:42.171181 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:47:42.171198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:47:42.171210 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:47:42.171222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:47:42.171242 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:47:42.171261 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:47:52.934813 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:47:52.934929 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:47:52.935002 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:47:52.935014 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:47:52.935026 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:47:52.935036 | orchestrator | 2026-04-06 05:47:52.935048 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-04-06 05:47:52.935059 | orchestrator | Monday 06 April 2026 05:47:43 +0000 (0:00:04.002) 0:00:51.771 ********** 2026-04-06 05:47:52.935068 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:47:52.935079 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:47:52.935089 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:47:52.935099 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:47:52.935110 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:47:52.935120 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:47:52.935157 | orchestrator | 2026-04-06 05:47:52.935176 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-04-06 05:47:52.935185 | orchestrator | Monday 06 April 2026 05:47:47 +0000 (0:00:03.571) 0:00:55.343 ********** 2026-04-06 05:47:52.935195 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:47:52.935204 | orchestrator | 2026-04-06 05:47:52.935215 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-04-06 05:47:52.935225 | orchestrator | Monday 06 April 2026 05:47:48 +0000 (0:00:01.153) 0:00:56.496 ********** 2026-04-06 
05:47:52.935234 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:47:52.935244 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:47:52.935253 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:47:52.935262 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:47:52.935272 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:47:52.935282 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:47:52.935292 | orchestrator | 2026-04-06 05:47:52.935303 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-04-06 05:47:52.935313 | orchestrator | Monday 06 April 2026 05:47:50 +0000 (0:00:01.991) 0:00:58.488 ********** 2026-04-06 05:47:52.935327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:47:52.935341 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:47:52.935375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:47:52.935389 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:47:52.935402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option 
httpchk']}}}})  2026-04-06 05:47:52.935422 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:47:52.935433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:47:52.935441 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:47:52.935448 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:47:52.935454 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:47:52.935467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:48:03.173104 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:48:03.173220 | orchestrator | 2026-04-06 05:48:03.173236 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-04-06 05:48:03.173249 | orchestrator | Monday 06 April 2026 05:47:54 +0000 (0:00:03.441) 0:01:01.929 ********** 2026-04-06 05:48:03.173314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:48:03.173360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:48:03.173376 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-06 
05:48:03.173390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:48:03.173421 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-06 05:48:03.173452 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-06 05:48:03.173473 | orchestrator | 2026-04-06 05:48:03.173485 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-04-06 05:48:03.173496 | orchestrator | Monday 06 April 2026 05:47:58 +0000 (0:00:04.137) 0:01:06.067 ********** 2026-04-06 05:48:03.173513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:48:03.173526 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-06 05:48:03.173547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:48:06.998298 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:48:06.998444 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-06 05:48:06.998463 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-06 05:48:06.998475 | orchestrator | 2026-04-06 05:48:06.998488 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-04-06 05:48:06.998501 | orchestrator | Monday 06 April 2026 05:48:04 +0000 (0:00:06.670) 0:01:12.737 ********** 2026-04-06 05:48:06.998513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:48:06.998526 | 
orchestrator | skipping: [testbed-node-0] 2026-04-06 05:48:06.998559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:48:06.998579 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:48:06.998596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option 
httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:48:06.998609 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:48:06.998620 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:48:06.998632 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:48:06.998643 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2026-04-06 05:48:06.998655 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:48:06.998675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:48:33.805230 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:48:33.805341 | orchestrator | 2026-04-06 05:48:33.805358 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-04-06 05:48:33.805370 | orchestrator | Monday 06 April 2026 05:48:08 +0000 (0:00:03.159) 0:01:15.897 ********** 2026-04-06 05:48:33.805382 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:48:33.805393 | orchestrator | ok: [testbed-node-1] 2026-04-06 05:48:33.805405 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:48:33.805417 | orchestrator | ok: [testbed-node-2] 2026-04-06 05:48:33.805427 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:48:33.805438 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:48:33.805449 | orchestrator | 2026-04-06 05:48:33.805460 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-04-06 05:48:33.805472 | orchestrator | Monday 06 April 2026 05:48:11 +0000 (0:00:03.700) 0:01:19.597 ********** 2026-04-06 05:48:33.805502 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:48:33.805517 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:48:33.805529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:48:33.805541 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:48:33.805552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:48:33.805564 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:48:33.805578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:48:33.805637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:48:33.805658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:48:33.805674 | orchestrator | 2026-04-06 05:48:33.805693 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-04-06 05:48:33.805713 | orchestrator | 
Monday 06 April 2026 05:48:16 +0000 (0:00:05.109) 0:01:24.708 ********** 2026-04-06 05:48:33.805725 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:48:33.805736 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:48:33.805747 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:48:33.805758 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:48:33.805769 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:48:33.805780 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:48:33.805791 | orchestrator | 2026-04-06 05:48:33.805802 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-04-06 05:48:33.805813 | orchestrator | Monday 06 April 2026 05:48:20 +0000 (0:00:03.409) 0:01:28.117 ********** 2026-04-06 05:48:33.805824 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:48:33.805835 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:48:33.805854 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:48:33.805865 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:48:33.805875 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:48:33.805886 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:48:33.805897 | orchestrator | 2026-04-06 05:48:33.805908 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-04-06 05:48:33.805919 | orchestrator | Monday 06 April 2026 05:48:23 +0000 (0:00:03.193) 0:01:31.311 ********** 2026-04-06 05:48:33.805930 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:48:33.805967 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:48:33.805979 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:48:33.805990 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:48:33.806001 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:48:33.806011 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:48:33.806084 | orchestrator | 2026-04-06 05:48:33.806096 | 
orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-04-06 05:48:33.806107 | orchestrator | Monday 06 April 2026 05:48:26 +0000 (0:00:03.458) 0:01:34.769 ********** 2026-04-06 05:48:33.806118 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:48:33.806129 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:48:33.806140 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:48:33.806151 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:48:33.806162 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:48:33.806172 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:48:33.806184 | orchestrator | 2026-04-06 05:48:33.806203 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-04-06 05:48:33.806217 | orchestrator | Monday 06 April 2026 05:48:30 +0000 (0:00:03.321) 0:01:38.091 ********** 2026-04-06 05:48:33.806228 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:48:33.806238 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:48:33.806249 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:48:33.806260 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:48:33.806271 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:48:33.806282 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:48:33.806293 | orchestrator | 2026-04-06 05:48:33.806304 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-04-06 05:48:33.806323 | orchestrator | Monday 06 April 2026 05:48:33 +0000 (0:00:03.484) 0:01:41.576 ********** 2026-04-06 05:48:42.145647 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-06 05:48:42.145782 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:48:42.145807 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-06 05:48:42.145827 | 
orchestrator | skipping: [testbed-node-2] 2026-04-06 05:48:42.145845 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-06 05:48:42.145900 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:48:42.145919 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-06 05:48:42.145967 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:48:42.145986 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-06 05:48:42.146005 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:48:42.146086 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-06 05:48:42.146107 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:48:42.146127 | orchestrator | 2026-04-06 05:48:42.146146 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-04-06 05:48:42.146165 | orchestrator | Monday 06 April 2026 05:48:37 +0000 (0:00:03.310) 0:01:44.886 ********** 2026-04-06 05:48:42.146212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:48:42.146263 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:48:42.146282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:48:42.146301 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:48:42.146344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:48:42.146365 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:48:42.146382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:48:42.146402 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:48:42.146430 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:48:42.146463 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:48:42.146480 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:48:42.146498 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:48:42.146514 | orchestrator | 2026-04-06 05:48:42.146530 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-04-06 05:48:42.146545 | orchestrator | Monday 06 April 2026 05:48:40 +0000 (0:00:03.473) 0:01:48.360 ********** 2026-04-06 05:48:42.146561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:48:42.146580 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:48:42.146612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:49:19.220485 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:49:19.220620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 
'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:49:19.220643 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:49:19.220657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:49:19.220670 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:49:19.220683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:49:19.220694 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:49:19.220706 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:49:19.220717 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:49:19.220728 | orchestrator | 2026-04-06 05:49:19.220740 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-04-06 05:49:19.220753 | orchestrator | Monday 06 April 2026 05:48:44 +0000 (0:00:03.684) 0:01:52.044 ********** 2026-04-06 05:49:19.220789 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:49:19.220800 | orchestrator | 
skipping: [testbed-node-2] 2026-04-06 05:49:19.220811 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:49:19.220822 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:49:19.220832 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:49:19.220843 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:49:19.220854 | orchestrator | 2026-04-06 05:49:19.220865 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-04-06 05:49:19.220893 | orchestrator | Monday 06 April 2026 05:48:47 +0000 (0:00:03.224) 0:01:55.268 ********** 2026-04-06 05:49:19.220904 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:49:19.220915 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:49:19.220952 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:49:19.220963 | orchestrator | changed: [testbed-node-3] 2026-04-06 05:49:19.220974 | orchestrator | changed: [testbed-node-5] 2026-04-06 05:49:19.220984 | orchestrator | changed: [testbed-node-4] 2026-04-06 05:49:19.220995 | orchestrator | 2026-04-06 05:49:19.221006 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-04-06 05:49:19.221017 | orchestrator | Monday 06 April 2026 05:48:52 +0000 (0:00:05.195) 0:02:00.464 ********** 2026-04-06 05:49:19.221030 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:49:19.221042 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:49:19.221055 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:49:19.221068 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:49:19.221086 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:49:19.221101 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:49:19.221113 | orchestrator | 2026-04-06 05:49:19.221126 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-04-06 05:49:19.221139 | orchestrator | Monday 06 April 2026 05:48:55 +0000 (0:00:03.253) 
0:02:03.718 ********** 2026-04-06 05:49:19.221152 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:49:19.221166 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:49:19.221178 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:49:19.221191 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:49:19.221203 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:49:19.221216 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:49:19.221229 | orchestrator | 2026-04-06 05:49:19.221241 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-04-06 05:49:19.221254 | orchestrator | Monday 06 April 2026 05:48:59 +0000 (0:00:03.376) 0:02:07.095 ********** 2026-04-06 05:49:19.221266 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:49:19.221278 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:49:19.221291 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:49:19.221304 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:49:19.221316 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:49:19.221329 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:49:19.221341 | orchestrator | 2026-04-06 05:49:19.221355 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-04-06 05:49:19.221369 | orchestrator | Monday 06 April 2026 05:49:02 +0000 (0:00:03.363) 0:02:10.458 ********** 2026-04-06 05:49:19.221381 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:49:19.221392 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:49:19.221402 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:49:19.221413 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:49:19.221424 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:49:19.221434 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:49:19.221445 | orchestrator | 2026-04-06 05:49:19.221456 | orchestrator | TASK [neutron : Copying over nsx.ini] 
****************************************** 2026-04-06 05:49:19.221467 | orchestrator | Monday 06 April 2026 05:49:06 +0000 (0:00:03.569) 0:02:14.028 ********** 2026-04-06 05:49:19.221477 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:49:19.221488 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:49:19.221499 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:49:19.221518 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:49:19.221528 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:49:19.221539 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:49:19.221550 | orchestrator | 2026-04-06 05:49:19.221561 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-04-06 05:49:19.221571 | orchestrator | Monday 06 April 2026 05:49:09 +0000 (0:00:03.248) 0:02:17.277 ********** 2026-04-06 05:49:19.221582 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:49:19.221593 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:49:19.221603 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:49:19.221614 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:49:19.221625 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:49:19.221635 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:49:19.221646 | orchestrator | 2026-04-06 05:49:19.221657 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-04-06 05:49:19.221668 | orchestrator | Monday 06 April 2026 05:49:13 +0000 (0:00:03.567) 0:02:20.845 ********** 2026-04-06 05:49:19.221678 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:49:19.221689 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:49:19.221700 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:49:19.221710 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:49:19.221721 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:49:19.221732 | orchestrator | skipping: 
[testbed-node-3] 2026-04-06 05:49:19.221742 | orchestrator | 2026-04-06 05:49:19.221753 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-04-06 05:49:19.221764 | orchestrator | Monday 06 April 2026 05:49:16 +0000 (0:00:03.762) 0:02:24.608 ********** 2026-04-06 05:49:19.221775 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-06 05:49:19.221787 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:49:19.221798 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-06 05:49:19.221808 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:49:19.221819 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-06 05:49:19.221830 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:49:19.221841 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-06 05:49:19.221852 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:49:19.221862 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-06 05:49:19.221873 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:49:19.221884 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-06 05:49:19.221895 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:49:19.221905 | orchestrator | 2026-04-06 05:49:19.221959 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-04-06 05:49:26.526568 | orchestrator | Monday 06 April 2026 05:49:20 +0000 (0:00:03.363) 0:02:27.971 ********** 2026-04-06 05:49:26.526682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:49:26.526715 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:49:26.526725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:49:26.526731 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:49:26.526738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:49:26.526745 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:49:26.526753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:49:26.526760 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:49:26.526784 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:49:26.526797 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:49:26.526803 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:49:26.526810 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:49:26.526817 | orchestrator | 2026-04-06 05:49:26.526824 | 
orchestrator | TASK [service-check-containers : neutron | Check containers] ******************* 2026-04-06 05:49:26.526832 | orchestrator | Monday 06 April 2026 05:49:23 +0000 (0:00:03.774) 0:02:31.746 ********** 2026-04-06 05:49:26.526839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:49:26.526846 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-06 05:49:26.526857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:49:31.672438 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-06 05:49:31.672549 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:49:31.672566 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-06 05:49:31.672579 | orchestrator | 2026-04-06 05:49:31.672593 | orchestrator | TASK [service-check-containers : neutron | Notify handlers to restart containers] *** 
2026-04-06 05:49:31.672605 | orchestrator | Monday 06 April 2026 05:49:27 +0000 (0:00:04.015) 0:02:35.762 **********
2026-04-06 05:49:31.672617 | orchestrator | changed: [testbed-node-0] => {
2026-04-06 05:49:31.672629 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 05:49:31.672641 | orchestrator | }
2026-04-06 05:49:31.672652 | orchestrator | changed: [testbed-node-1] => {
2026-04-06 05:49:31.672663 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 05:49:31.672674 | orchestrator | }
2026-04-06 05:49:31.672685 | orchestrator | changed: [testbed-node-2] => {
2026-04-06 05:49:31.672696 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 05:49:31.672707 | orchestrator | }
2026-04-06 05:49:31.672718 | orchestrator | changed: [testbed-node-3] => {
2026-04-06 05:49:31.672729 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 05:49:31.672740 | orchestrator | }
2026-04-06 05:49:31.672751 | orchestrator | changed: [testbed-node-4] => {
2026-04-06 05:49:31.672762 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 05:49:31.672773 | orchestrator | }
2026-04-06 05:49:31.672783 | orchestrator | changed: [testbed-node-5] => {
2026-04-06 05:49:31.672795 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 05:49:31.672832 | orchestrator | }
2026-04-06 05:49:31.672844 | orchestrator |
2026-04-06 05:49:31.672855 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-06 05:49:31.672866 | orchestrator | Monday 06 April 2026 05:49:29 +0000 (0:00:01.791) 0:02:37.553 **********
2026-04-06 05:49:31.672903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:49:31.672917 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:49:31.673000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:49:31.673014 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:49:31.673028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:49:31.673041 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:49:31.673055 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:49:31.673077 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:49:31.673104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:52:33.687173 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:52:33.687285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-06 05:52:33.687302 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:52:33.687313 | orchestrator | 2026-04-06 05:52:33.687323 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-06 05:52:33.687334 | orchestrator | Monday 06 April 2026 05:49:33 +0000 (0:00:03.874) 0:02:41.427 ********** 2026-04-06 05:52:33.687344 | orchestrator | skipping: [testbed-node-0] 
2026-04-06 05:52:33.687354 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:52:33.687364 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:52:33.687373 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:52:33.687383 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:52:33.687393 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:52:33.687403 | orchestrator |
2026-04-06 05:52:33.687413 | orchestrator | TASK [neutron : Running Neutron database expand container] *********************
2026-04-06 05:52:33.687423 | orchestrator | Monday 06 April 2026 05:49:35 +0000 (0:00:01.764) 0:02:43.192 **********
2026-04-06 05:52:33.687432 | orchestrator | changed: [testbed-node-0]
2026-04-06 05:52:33.687442 | orchestrator |
2026-04-06 05:52:33.687452 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-06 05:52:33.687462 | orchestrator | Monday 06 April 2026 05:50:10 +0000 (0:00:34.713) 0:03:17.905 **********
2026-04-06 05:52:33.687471 | orchestrator |
2026-04-06 05:52:33.687481 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-06 05:52:33.687491 | orchestrator | Monday 06 April 2026 05:50:10 +0000 (0:00:00.444) 0:03:18.350 **********
2026-04-06 05:52:33.687501 | orchestrator |
2026-04-06 05:52:33.687510 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-06 05:52:33.687520 | orchestrator | Monday 06 April 2026 05:50:11 +0000 (0:00:00.440) 0:03:18.791 **********
2026-04-06 05:52:33.687530 | orchestrator |
2026-04-06 05:52:33.687539 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-06 05:52:33.687549 | orchestrator | Monday 06 April 2026 05:50:11 +0000 (0:00:00.651) 0:03:19.443 **********
2026-04-06 05:52:33.687581 | orchestrator |
2026-04-06 05:52:33.687591 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-06 05:52:33.687601 | orchestrator | Monday 06 April 2026 05:50:12 +0000 (0:00:00.451) 0:03:19.894 **********
2026-04-06 05:52:33.687610 | orchestrator |
2026-04-06 05:52:33.687620 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-06 05:52:33.687630 | orchestrator | Monday 06 April 2026 05:50:12 +0000 (0:00:00.448) 0:03:20.343 **********
2026-04-06 05:52:33.687639 | orchestrator |
2026-04-06 05:52:33.687649 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-04-06 05:52:33.687658 | orchestrator | Monday 06 April 2026 05:50:13 +0000 (0:00:00.831) 0:03:21.175 **********
2026-04-06 05:52:33.687668 | orchestrator | changed: [testbed-node-0]
2026-04-06 05:52:33.687678 | orchestrator | changed: [testbed-node-2]
2026-04-06 05:52:33.687688 | orchestrator | changed: [testbed-node-1]
2026-04-06 05:52:33.687697 | orchestrator |
2026-04-06 05:52:33.687707 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-04-06 05:52:33.687717 | orchestrator | Monday 06 April 2026 05:50:57 +0000 (0:00:43.821) 0:04:04.996 **********
2026-04-06 05:52:33.687727 | orchestrator | changed: [testbed-node-5]
2026-04-06 05:52:33.687736 | orchestrator | changed: [testbed-node-3]
2026-04-06 05:52:33.687746 | orchestrator | changed: [testbed-node-4]
2026-04-06 05:52:33.687755 | orchestrator |
2026-04-06 05:52:33.687765 | orchestrator | TASK [neutron : Checking neutron pending contract scripts] *********************
2026-04-06 05:52:33.687775 | orchestrator | Monday 06 April 2026 05:52:04 +0000 (0:01:07.219) 0:05:12.216 **********
2026-04-06 05:52:33.687784 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:52:33.687794 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:52:33.687803 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:52:33.687813 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:52:33.687822 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:52:33.687832 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:52:33.687841 | orchestrator |
2026-04-06 05:52:33.687851 | orchestrator | TASK [neutron : Stopping all neutron-server for contract db] *******************
2026-04-06 05:52:33.687861 | orchestrator | Monday 06 April 2026 05:52:06 +0000 (0:00:02.014) 0:05:14.231 **********
2026-04-06 05:52:33.687870 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:52:33.687900 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:52:33.687909 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:52:33.687919 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:52:33.687929 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:52:33.687938 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:52:33.687948 | orchestrator |
2026-04-06 05:52:33.687958 | orchestrator | TASK [neutron : Running Neutron database contract container] *******************
2026-04-06 05:52:33.687968 | orchestrator | Monday 06 April 2026 05:52:11 +0000 (0:00:05.207) 0:05:19.438 **********
2026-04-06 05:52:33.687978 | orchestrator | changed: [testbed-node-0]
2026-04-06 05:52:33.687987 | orchestrator |
2026-04-06 05:52:33.687997 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-06 05:52:33.688034 | orchestrator | Monday 06 April 2026 05:52:27 +0000 (0:00:16.107) 0:05:35.546 **********
2026-04-06 05:52:33.688046 | orchestrator |
2026-04-06 05:52:33.688056 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-06 05:52:33.688065 | orchestrator | Monday 06 April 2026 05:52:28 +0000 (0:00:00.444) 0:05:35.990 **********
2026-04-06 05:52:33.688075 | orchestrator |
2026-04-06 05:52:33.688085 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-06 05:52:33.688095 | orchestrator | Monday 06 April 2026 05:52:28 +0000 (0:00:00.475) 0:05:36.465 **********
2026-04-06 05:52:33.688104 | orchestrator |
2026-04-06 05:52:33.688114 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-06 05:52:33.688124 | orchestrator | Monday 06 April 2026 05:52:29 +0000 (0:00:00.456) 0:05:36.922 **********
2026-04-06 05:52:33.688141 | orchestrator |
2026-04-06 05:52:33.688151 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-06 05:52:33.688161 | orchestrator | Monday 06 April 2026 05:52:29 +0000 (0:00:00.507) 0:05:37.429 **********
2026-04-06 05:52:33.688171 | orchestrator |
2026-04-06 05:52:33.688180 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-06 05:52:33.688190 | orchestrator | Monday 06 April 2026 05:52:30 +0000 (0:00:00.418) 0:05:37.848 **********
2026-04-06 05:52:33.688200 | orchestrator |
2026-04-06 05:52:33.688210 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-06 05:52:33.688219 | orchestrator | Monday 06 April 2026 05:52:30 +0000 (0:00:00.776) 0:05:38.625 **********
2026-04-06 05:52:33.688229 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:52:33.688239 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:52:33.688249 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:52:33.688258 | orchestrator | skipping: [testbed-node-3]
2026-04-06 05:52:33.688268 | orchestrator | skipping: [testbed-node-4]
2026-04-06 05:52:33.688278 | orchestrator | skipping: [testbed-node-5]
2026-04-06 05:52:33.688287 | orchestrator |
2026-04-06 05:52:33.688297 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 05:52:33.688309 | orchestrator | testbed-node-0 : ok=21  changed=8  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0
2026-04-06 05:52:33.688320 | orchestrator | testbed-node-1 : ok=18  changed=6  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2026-04-06 05:52:33.688330 | orchestrator | testbed-node-2 : ok=18  changed=6  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2026-04-06 05:52:33.688340 | orchestrator | testbed-node-3 : ok=17  changed=6  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0
2026-04-06 05:52:33.688350 | orchestrator | testbed-node-4 : ok=17  changed=6  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0
2026-04-06 05:52:33.688360 | orchestrator | testbed-node-5 : ok=17  changed=6  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0
2026-04-06 05:52:33.688372 | orchestrator |
2026-04-06 05:52:33.688388 | orchestrator |
2026-04-06 05:52:33.688404 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 05:52:33.688419 | orchestrator | Monday 06 April 2026 05:52:33 +0000 (0:00:02.817) 0:05:41.443 **********
2026-04-06 05:52:33.688436 | orchestrator | ===============================================================================
2026-04-06 05:52:33.688450 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 67.22s
2026-04-06 05:52:33.688465 | orchestrator | neutron : Restart neutron-server container ----------------------------- 43.82s
2026-04-06 05:52:33.688480 | orchestrator | neutron : Running Neutron database expand container -------------------- 34.71s
2026-04-06 05:52:33.688496 | orchestrator | neutron : Running Neutron database contract container ------------------ 16.11s
2026-04-06 05:52:33.688511 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.67s
2026-04-06 05:52:33.688527 | orchestrator | neutron : Stopping all neutron-server for contract db ------------------- 5.21s
2026-04-06 05:52:33.688543 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.20s
2026-04-06 05:52:33.688558 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.11s
2026-04-06 05:52:33.688574 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.14s
2026-04-06 05:52:33.688590 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.06s
2026-04-06 05:52:33.688605 | orchestrator | service-check-containers : neutron | Check containers ------------------- 4.01s
2026-04-06 05:52:33.688622 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 4.00s
2026-04-06 05:52:33.688650 | orchestrator | neutron : include_tasks ------------------------------------------------- 4.00s
2026-04-06 05:52:33.688665 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.87s
2026-04-06 05:52:33.688681 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 3.78s
2026-04-06 05:52:33.688696 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 3.76s
2026-04-06 05:52:33.688713 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.70s
2026-04-06 05:52:33.688730 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 3.68s
2026-04-06 05:52:33.688746 | orchestrator | Setting sysctl values --------------------------------------------------- 3.68s
2026-04-06 05:52:33.688780 | orchestrator | Load and persist kernel modules ----------------------------------------- 3.60s
2026-04-06 05:52:34.273992 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-06 05:52:34.274108 | orchestrator | + osism apply -a reconfigure nova
2026-04-06 05:52:35.670942 | orchestrator | 2026-04-06 05:52:35 | INFO  | Prepare task for execution of nova.
2026-04-06 05:52:35.744268 | orchestrator | 2026-04-06 05:52:35 | INFO  | Task 2fd1a9d1-4a36-42e6-b01b-edff3c961366 (nova) was prepared for execution.
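The PLAY RECAP above is the usual gate for deciding whether an upgrade step like this succeeded: every node must report `failed=0` and `unreachable=0`. A minimal sketch of machine-checking such recap lines when post-processing console logs like this one; the parsing helpers are illustrative, not part of Zuul or osism, and the sample lines are copied from the recap above:

```python
import re

# Matches Ansible PLAY RECAP lines of the form:
#   testbed-node-0 : ok=21  changed=8  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line):
    """Return (hostname, {counter: value}) for a recap line, or None if it is not one."""
    m = RECAP_RE.match(line.strip())
    if m is None:
        return None
    counters = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", m.group("counters"))}
    return m.group("host"), counters

def play_succeeded(recap_lines):
    """A play is healthy when no host reports failures or unreachability."""
    parsed = [p for p in map(parse_recap_line, recap_lines) if p]
    return bool(parsed) and all(
        c["failed"] == 0 and c["unreachable"] == 0 for _, c in parsed
    )

recap = [
    "testbed-node-0 : ok=21  changed=8  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0",
    "testbed-node-1 : ok=18  changed=6  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0",
]
print(play_succeeded(recap))  # → True
```

In this job the wrapper script relies on the osism/kolla-ansible exit code instead, which is why the log simply proceeds to `osism apply -a reconfigure nova` once the neutron play finishes cleanly.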
2026-04-06 05:52:35.744366 | orchestrator | 2026-04-06 05:52:35 | INFO  | It takes a moment until task 2fd1a9d1-4a36-42e6-b01b-edff3c961366 (nova) has been started and output is visible here.
2026-04-06 05:54:57.310421 | orchestrator |
2026-04-06 05:54:57.310539 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 05:54:57.310556 | orchestrator |
2026-04-06 05:54:57.310568 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-04-06 05:54:57.310580 | orchestrator | Monday 06 April 2026 05:52:41 +0000 (0:00:01.798) 0:00:01.798 **********
2026-04-06 05:54:57.310591 | orchestrator | changed: [testbed-manager]
2026-04-06 05:54:57.310603 | orchestrator | changed: [testbed-node-0]
2026-04-06 05:54:57.310614 | orchestrator | changed: [testbed-node-1]
2026-04-06 05:54:57.310625 | orchestrator | changed: [testbed-node-2]
2026-04-06 05:54:57.310636 | orchestrator | changed: [testbed-node-3]
2026-04-06 05:54:57.310647 | orchestrator | changed: [testbed-node-4]
2026-04-06 05:54:57.310657 | orchestrator | changed: [testbed-node-5]
2026-04-06 05:54:57.310668 | orchestrator |
2026-04-06 05:54:57.310680 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 05:54:57.310691 | orchestrator | Monday 06 April 2026 05:52:44 +0000 (0:00:03.592) 0:00:05.391 **********
2026-04-06 05:54:57.310701 | orchestrator | changed: [testbed-manager]
2026-04-06 05:54:57.310712 | orchestrator | changed: [testbed-node-0]
2026-04-06 05:54:57.310723 | orchestrator | changed: [testbed-node-1]
2026-04-06 05:54:57.310734 | orchestrator | changed: [testbed-node-2]
2026-04-06 05:54:57.310745 | orchestrator | changed: [testbed-node-3]
2026-04-06 05:54:57.310755 | orchestrator | changed: [testbed-node-4]
2026-04-06 05:54:57.310766 | orchestrator | changed: [testbed-node-5]
2026-04-06 05:54:57.310777 | orchestrator |
2026-04-06 05:54:57.310788 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 05:54:57.310799 | orchestrator | Monday 06 April 2026 05:52:46 +0000 (0:00:02.356) 0:00:07.747 **********
2026-04-06 05:54:57.310810 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-04-06 05:54:57.310821 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-04-06 05:54:57.310832 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-04-06 05:54:57.310843 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-04-06 05:54:57.310880 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-04-06 05:54:57.310892 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-04-06 05:54:57.310903 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-04-06 05:54:57.310914 | orchestrator |
2026-04-06 05:54:57.310925 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-04-06 05:54:57.310958 | orchestrator |
2026-04-06 05:54:57.310971 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-06 05:54:57.310984 | orchestrator | Monday 06 April 2026 05:52:49 +0000 (0:00:02.766) 0:00:10.514 **********
2026-04-06 05:54:57.310997 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 05:54:57.311010 | orchestrator |
2026-04-06 05:54:57.311023 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-04-06 05:54:57.311036 | orchestrator | Monday 06 April 2026 05:52:52 +0000 (0:00:03.068) 0:00:13.582 **********
2026-04-06 05:54:57.311050 | orchestrator | ok: [testbed-node-0] => (item=nova_cell0)
2026-04-06 05:54:57.311063 | orchestrator | ok: [testbed-node-0] => (item=nova_api)
2026-04-06 05:54:57.311076 | orchestrator |
2026-04-06 05:54:57.311088 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-04-06 05:54:57.311101 | orchestrator | Monday 06 April 2026 05:52:58 +0000 (0:00:05.489) 0:00:19.072 **********
2026-04-06 05:54:57.311115 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-06 05:54:57.311128 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-06 05:54:57.311140 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:54:57.311153 | orchestrator |
2026-04-06 05:54:57.311167 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-06 05:54:57.311179 | orchestrator | Monday 06 April 2026 05:53:03 +0000 (0:00:05.502) 0:00:24.574 **********
2026-04-06 05:54:57.311192 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:54:57.311205 | orchestrator |
2026-04-06 05:54:57.311218 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-04-06 05:54:57.311231 | orchestrator | Monday 06 April 2026 05:53:05 +0000 (0:00:01.655) 0:00:26.230 **********
2026-04-06 05:54:57.311243 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:54:57.311256 | orchestrator |
2026-04-06 05:54:57.311268 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-04-06 05:54:57.311281 | orchestrator | Monday 06 April 2026 05:53:07 +0000 (0:00:02.193) 0:00:28.424 **********
2026-04-06 05:54:57.311294 | orchestrator | changed: [testbed-node-0]
2026-04-06 05:54:57.311306 | orchestrator |
2026-04-06 05:54:57.311318 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-06 05:54:57.311331 | orchestrator | Monday 06 April 2026 05:53:11 +0000 (0:00:03.868) 0:00:32.292 **********
2026-04-06 05:54:57.311344 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:54:57.311357 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:54:57.311368 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:54:57.311379 | orchestrator |
2026-04-06 05:54:57.311390 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-06 05:54:57.311401 | orchestrator | Monday 06 April 2026 05:53:13 +0000 (0:00:01.775) 0:00:34.067 **********
2026-04-06 05:54:57.311411 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:54:57.311422 | orchestrator |
2026-04-06 05:54:57.311433 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-04-06 05:54:57.311457 | orchestrator | Monday 06 April 2026 05:53:46 +0000 (0:00:33.700) 0:01:07.768 **********
2026-04-06 05:54:57.311469 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:54:57.311479 | orchestrator |
2026-04-06 05:54:57.311490 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-06 05:54:57.311501 | orchestrator | Monday 06 April 2026 05:54:02 +0000 (0:00:15.597) 0:01:23.366 **********
2026-04-06 05:54:57.311513 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:54:57.311524 | orchestrator |
2026-04-06 05:54:57.311534 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-06 05:54:57.311546 | orchestrator | Monday 06 April 2026 05:54:17 +0000 (0:00:15.071) 0:01:38.437 **********
2026-04-06 05:54:57.311556 | orchestrator | ok: [testbed-node-0]
2026-04-06 05:54:57.311567 | orchestrator |
2026-04-06 05:54:57.311594 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-04-06 05:54:57.311605 | orchestrator | Monday 06 April 2026 05:54:19 +0000 (0:00:01.988) 0:01:40.426 **********
2026-04-06 05:54:57.311625 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:54:57.311636 | orchestrator |
2026-04-06 05:54:57.311647 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-06 05:54:57.311658 | orchestrator | Monday 06 April 2026 05:54:21 +0000
(0:00:01.689) 0:01:42.116 ********** 2026-04-06 05:54:57.311668 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:54:57.311679 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:54:57.311690 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:54:57.311701 | orchestrator | 2026-04-06 05:54:57.311712 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-06 05:54:57.311722 | orchestrator | Monday 06 April 2026 05:54:22 +0000 (0:00:01.423) 0:01:43.540 ********** 2026-04-06 05:54:57.311733 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:54:57.311744 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:54:57.311754 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:54:57.311765 | orchestrator | 2026-04-06 05:54:57.311776 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-04-06 05:54:57.311787 | orchestrator | 2026-04-06 05:54:57.311798 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-06 05:54:57.311808 | orchestrator | Monday 06 April 2026 05:54:24 +0000 (0:00:01.762) 0:01:45.302 ********** 2026-04-06 05:54:57.311819 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 05:54:57.311830 | orchestrator | 2026-04-06 05:54:57.311841 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-04-06 05:54:57.311852 | orchestrator | Monday 06 April 2026 05:54:26 +0000 (0:00:01.772) 0:01:47.074 ********** 2026-04-06 05:54:57.311900 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:54:57.311911 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:54:57.311922 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:54:57.311933 | orchestrator | 2026-04-06 05:54:57.311944 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-04-06 
05:54:57.311954 | orchestrator | Monday 06 April 2026 05:54:29 +0000 (0:00:02.931) 0:01:50.005 ********** 2026-04-06 05:54:57.311965 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:54:57.311976 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:54:57.311987 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:54:57.311997 | orchestrator | 2026-04-06 05:54:57.312008 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-06 05:54:57.312019 | orchestrator | Monday 06 April 2026 05:54:32 +0000 (0:00:03.527) 0:01:53.533 ********** 2026-04-06 05:54:57.312030 | orchestrator | skipping: [testbed-node-1] => (item=openstack)  2026-04-06 05:54:57.312040 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:54:57.312051 | orchestrator | skipping: [testbed-node-2] => (item=openstack)  2026-04-06 05:54:57.312062 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:54:57.312072 | orchestrator | ok: [testbed-node-0] => (item=openstack) 2026-04-06 05:54:57.312083 | orchestrator | 2026-04-06 05:54:57.312094 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-06 05:54:57.312105 | orchestrator | Monday 06 April 2026 05:54:37 +0000 (0:00:04.828) 0:01:58.361 ********** 2026-04-06 05:54:57.312116 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-06 05:54:57.312126 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:54:57.312137 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-06 05:54:57.312148 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:54:57.312159 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-06 05:54:57.312169 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-04-06 05:54:57.312180 | orchestrator | 2026-04-06 05:54:57.312191 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-04-06 05:54:57.312202 | 
orchestrator | Monday 06 April 2026 05:54:50 +0000 (0:00:12.573) 0:02:10.935 ********** 2026-04-06 05:54:57.312212 | orchestrator | skipping: [testbed-node-0] => (item=openstack)  2026-04-06 05:54:57.312230 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:54:57.312241 | orchestrator | skipping: [testbed-node-1] => (item=openstack)  2026-04-06 05:54:57.312251 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:54:57.312262 | orchestrator | skipping: [testbed-node-2] => (item=openstack)  2026-04-06 05:54:57.312273 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:54:57.312283 | orchestrator | 2026-04-06 05:54:57.312294 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-04-06 05:54:57.312305 | orchestrator | Monday 06 April 2026 05:54:51 +0000 (0:00:01.589) 0:02:12.525 ********** 2026-04-06 05:54:57.312316 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-06 05:54:57.312326 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:54:57.312337 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-06 05:54:57.312348 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:54:57.312358 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-06 05:54:57.312369 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:54:57.312380 | orchestrator | 2026-04-06 05:54:57.312391 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-06 05:54:57.312402 | orchestrator | Monday 06 April 2026 05:54:53 +0000 (0:00:01.860) 0:02:14.386 ********** 2026-04-06 05:54:57.312412 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:54:57.312428 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:54:57.312440 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:54:57.312450 | orchestrator | 2026-04-06 05:54:57.312461 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 
2026-04-06 05:54:57.312472 | orchestrator | Monday 06 April 2026 05:54:55 +0000 (0:00:01.582) 0:02:15.968 ********** 2026-04-06 05:54:57.312483 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:54:57.312493 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:54:57.312504 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:54:57.312515 | orchestrator | 2026-04-06 05:54:57.312526 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-04-06 05:54:57.312537 | orchestrator | Monday 06 April 2026 05:54:57 +0000 (0:00:01.890) 0:02:17.858 ********** 2026-04-06 05:54:57.312554 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:56:25.259581 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:56:25.259699 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:56:25.259715 | orchestrator | 2026-04-06 05:56:25.259728 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-04-06 05:56:25.259740 | orchestrator | Monday 06 April 2026 05:55:00 +0000 (0:00:03.711) 0:02:21.570 ********** 2026-04-06 05:56:25.259751 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:56:25.259762 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:56:25.259773 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:56:25.259784 | orchestrator | 2026-04-06 05:56:25.259795 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-06 05:56:25.259806 | orchestrator | Monday 06 April 2026 05:55:13 +0000 (0:00:12.934) 0:02:34.504 ********** 2026-04-06 05:56:25.259817 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:56:25.259828 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:56:25.259839 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:56:25.259909 | orchestrator | 2026-04-06 05:56:25.259922 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-06 
05:56:25.259933 | orchestrator | Monday 06 April 2026 05:55:26 +0000 (0:00:13.236) 0:02:47.741 ********** 2026-04-06 05:56:25.259944 | orchestrator | ok: [testbed-node-0] 2026-04-06 05:56:25.259955 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:56:25.259965 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:56:25.259976 | orchestrator | 2026-04-06 05:56:25.259987 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-04-06 05:56:25.259998 | orchestrator | Monday 06 April 2026 05:55:29 +0000 (0:00:02.293) 0:02:50.034 ********** 2026-04-06 05:56:25.260009 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:56:25.260045 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:56:25.260056 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:56:25.260067 | orchestrator | 2026-04-06 05:56:25.260078 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-04-06 05:56:25.260089 | orchestrator | Monday 06 April 2026 05:55:31 +0000 (0:00:02.005) 0:02:52.039 ********** 2026-04-06 05:56:25.260100 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:56:25.260111 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:56:25.260121 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:56:25.260134 | orchestrator | 2026-04-06 05:56:25.260146 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-06 05:56:25.260159 | orchestrator | Monday 06 April 2026 05:55:44 +0000 (0:00:13.583) 0:03:05.623 ********** 2026-04-06 05:56:25.260172 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:56:25.260183 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:56:25.260195 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:56:25.260208 | orchestrator | 2026-04-06 05:56:25.260221 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-04-06 
05:56:25.260234 | orchestrator | 2026-04-06 05:56:25.260247 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-06 05:56:25.260260 | orchestrator | Monday 06 April 2026 05:55:46 +0000 (0:00:01.623) 0:03:07.246 ********** 2026-04-06 05:56:25.260273 | orchestrator | included: /ansible/roles/nova/tasks/reconfigure.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 05:56:25.260287 | orchestrator | 2026-04-06 05:56:25.260300 | orchestrator | TASK [service-ks-register : nova | Creating/deleting services] ***************** 2026-04-06 05:56:25.260313 | orchestrator | Monday 06 April 2026 05:55:48 +0000 (0:00:01.944) 0:03:09.191 ********** 2026-04-06 05:56:25.260326 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-04-06 05:56:25.260339 | orchestrator | ok: [testbed-node-0] => (item=nova (compute)) 2026-04-06 05:56:25.260350 | orchestrator | 2026-04-06 05:56:25.260361 | orchestrator | TASK [service-ks-register : nova | Creating/deleting endpoints] **************** 2026-04-06 05:56:25.260372 | orchestrator | Monday 06 April 2026 05:55:52 +0000 (0:00:04.477) 0:03:13.668 ********** 2026-04-06 05:56:25.260383 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-04-06 05:56:25.260395 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-04-06 05:56:25.260406 | orchestrator | ok: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-04-06 05:56:25.260417 | orchestrator | ok: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-04-06 05:56:25.260427 | orchestrator | 2026-04-06 05:56:25.260438 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-04-06 05:56:25.260449 | 
orchestrator | Monday 06 April 2026 05:56:00 +0000 (0:00:07.653) 0:03:21.322 ********** 2026-04-06 05:56:25.260460 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-06 05:56:25.260470 | orchestrator | 2026-04-06 05:56:25.260481 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-04-06 05:56:25.260492 | orchestrator | Monday 06 April 2026 05:56:04 +0000 (0:00:04.229) 0:03:25.552 ********** 2026-04-06 05:56:25.260502 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-04-06 05:56:25.260513 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-06 05:56:25.260524 | orchestrator | 2026-04-06 05:56:25.260549 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-04-06 05:56:25.260560 | orchestrator | Monday 06 April 2026 05:56:10 +0000 (0:00:05.779) 0:03:31.331 ********** 2026-04-06 05:56:25.260571 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-06 05:56:25.260582 | orchestrator | 2026-04-06 05:56:25.260592 | orchestrator | TASK [service-ks-register : nova | Granting/revoking user roles] *************** 2026-04-06 05:56:25.260603 | orchestrator | Monday 06 April 2026 05:56:14 +0000 (0:00:04.341) 0:03:35.673 ********** 2026-04-06 05:56:25.260621 | orchestrator | ok: [testbed-node-0] => (item=nova -> service -> admin) 2026-04-06 05:56:25.260632 | orchestrator | ok: [testbed-node-0] => (item=nova -> service -> service) 2026-04-06 05:56:25.260643 | orchestrator | 2026-04-06 05:56:25.260671 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-06 05:56:25.260682 | orchestrator | Monday 06 April 2026 05:56:23 +0000 (0:00:08.636) 0:03:44.309 ********** 2026-04-06 05:56:25.260699 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:56:25.260716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:56:25.260730 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:56:25.260757 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:56:36.594908 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 05:56:36.595029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}}}}) 2026-04-06 05:56:36.595049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:56:36.595081 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 05:56:36.595117 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 05:56:36.595130 | orchestrator | 2026-04-06 05:56:36.595143 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-06 05:56:36.595156 | orchestrator | Monday 06 April 2026 05:56:27 +0000 (0:00:03.548) 0:03:47.858 ********** 2026-04-06 05:56:36.595183 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:56:36.595196 | orchestrator | 2026-04-06 05:56:36.595207 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-06 05:56:36.595218 | orchestrator | Monday 06 April 2026 05:56:28 +0000 (0:00:01.124) 0:03:48.982 ********** 2026-04-06 05:56:36.595229 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:56:36.595240 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:56:36.595250 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:56:36.595261 | orchestrator | 2026-04-06 05:56:36.595272 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-04-06 05:56:36.595282 | orchestrator | Monday 06 April 2026 05:56:29 +0000 (0:00:01.362) 0:03:50.344 ********** 2026-04-06 05:56:36.595293 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-06 05:56:36.595304 | orchestrator | 2026-04-06 05:56:36.595314 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-06 05:56:36.595325 | orchestrator | Monday 06 April 2026 05:56:31 +0000 (0:00:02.194) 0:03:52.539 ********** 2026-04-06 05:56:36.595336 | 
orchestrator | skipping: [testbed-node-0] 2026-04-06 05:56:36.595347 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:56:36.595357 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:56:36.595368 | orchestrator | 2026-04-06 05:56:36.595379 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-06 05:56:36.595389 | orchestrator | Monday 06 April 2026 05:56:33 +0000 (0:00:01.358) 0:03:53.897 ********** 2026-04-06 05:56:36.595401 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 05:56:36.595412 | orchestrator | 2026-04-06 05:56:36.595423 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-06 05:56:36.595434 | orchestrator | Monday 06 April 2026 05:56:35 +0000 (0:00:01.937) 0:03:55.835 ********** 2026-04-06 05:56:36.595446 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}}}}) 2026-04-06 05:56:36.595472 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:56:36.595494 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:56:40.093922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:56:40.094088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:56:40.094146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:56:40.094163 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 05:56:40.094196 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 05:56:40.094209 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 05:56:40.094222 | orchestrator | 2026-04-06 05:56:40.094235 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-06 05:56:40.094247 | orchestrator | Monday 06 April 2026 05:56:39 +0000 (0:00:04.238) 0:04:00.073 ********** 2026-04-06 05:56:40.094260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:56:40.094288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:56:40.094302 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 05:56:40.094314 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:56:40.094337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:56:41.936319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 
'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:56:41.936452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 05:56:41.936470 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:56:41.936500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:56:41.936515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:56:41.936545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 05:56:41.936558 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:56:41.936577 | orchestrator | 2026-04-06 05:56:41.936589 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-06 05:56:41.936602 | orchestrator | Monday 06 April 2026 05:56:41 +0000 (0:00:02.003) 0:04:02.077 ********** 2026-04-06 05:56:41.936614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}}}})  2026-04-06 05:56:41.936632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:56:41.936645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 05:56:41.936656 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:56:41.936675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:56:45.439476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:56:45.439589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 05:56:45.439602 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:56:45.439613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:56:45.439624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:56:45.439664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 05:56:45.439673 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:56:45.439681 | orchestrator | 2026-04-06 05:56:45.439689 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-06 05:56:45.439710 | orchestrator | Monday 06 April 2026 05:56:43 +0000 (0:00:01.831) 0:04:03.909 
********** 2026-04-06 05:56:45.439737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:56:45.439747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:56:45.439756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:56:45.439776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 
'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:56:53.786742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:56:53.786925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:56:53.786961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 05:56:53.787015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 05:56:53.787034 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 05:56:53.787050 | orchestrator | 2026-04-06 05:56:53.787067 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-06 05:56:53.787105 | orchestrator | Monday 06 April 2026 05:56:47 +0000 (0:00:04.705) 0:04:08.615 ********** 2026-04-06 05:56:53.787153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}}}}) 2026-04-06 05:56:53.787174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:56:53.787187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:56:53.787221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:56:58.526468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:56:58.526565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:56:58.526607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 05:56:58.526623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 05:56:58.526680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 05:56:58.526734 | orchestrator | 2026-04-06 05:56:58.526754 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-06 05:56:58.526784 | orchestrator | Monday 06 April 2026 05:56:57 +0000 (0:00:10.097) 0:04:18.712 ********** 2026-04-06 05:56:58.526803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:56:58.526816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:56:58.526838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 05:56:58.526875 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:56:58.526888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:56:58.526913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:57:16.428167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 05:57:16.428283 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:57:16.428305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:57:16.428345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:57:16.428359 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 05:57:16.428371 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:57:16.428383 | orchestrator |
2026-04-06 05:57:16.428395 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2026-04-06 05:57:16.428407 | orchestrator | Monday 06 April 2026 05:56:59 +0000 (0:00:01.762) 0:04:20.475 **********
2026-04-06 05:57:16.428418 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:57:16.428429 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:57:16.428440 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:57:16.428451 | orchestrator |
2026-04-06 05:57:16.428462 | orchestrator | TASK [nova : Copying over nova-metadata-wsgi.conf] *****************************
2026-04-06 05:57:16.428486 | orchestrator | Monday 06 April 2026 05:57:01 +0000 (0:00:01.732) 0:04:22.207 **********
2026-04-06 05:57:16.428497 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:57:16.428508 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:57:16.428519 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:57:16.428530 | orchestrator |
2026-04-06 05:57:16.428541 | orchestrator | TASK [nova : Copying over vendordata file for nova services] *******************
2026-04-06 05:57:16.428569 | orchestrator | Monday 06 April 2026 05:57:03 +0000 (0:00:02.082) 0:04:24.290 **********
2026-04-06 05:57:16.428581 | orchestrator | skipping: [testbed-node-0] => (item=nova-metadata)
2026-04-06 05:57:16.428593 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-04-06 05:57:16.428612 | orchestrator | skipping: [testbed-node-0]
2026-04-06 05:57:16.428623 | orchestrator | skipping: [testbed-node-1] => (item=nova-metadata)
2026-04-06 05:57:16.428634 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-04-06 05:57:16.428645 | orchestrator | skipping: [testbed-node-1]
2026-04-06 05:57:16.428655 | orchestrator | skipping: [testbed-node-2] => (item=nova-metadata)
2026-04-06 05:57:16.428666 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-04-06 05:57:16.428678 | orchestrator | skipping: [testbed-node-2]
2026-04-06 05:57:16.428689 | orchestrator |
2026-04-06 05:57:16.428702 | orchestrator | TASK [Configure uWSGI for Nova] ************************************************
2026-04-06 05:57:16.428714 | orchestrator | Monday 06 April 2026 05:57:05 +0000 (0:00:01.689) 0:04:25.979 **********
2026-04-06 05:57:16.428728 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-api', 'port': '8774', 'workers': '2'})
2026-04-06 05:57:16.428742 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-metadata', 'port': '8775', 'workers': '2'})
2026-04-06 05:57:16.428755 | orchestrator |
2026-04-06 05:57:16.428768 | orchestrator | TASK [service-uwsgi-config : Copying over nova-api uWSGI config] ***************
2026-04-06 05:57:16.428780 | orchestrator | Monday 06 April 2026 05:57:07 +0000 (0:00:02.567) 0:04:28.547 **********
2026-04-06 05:57:16.428793 | orchestrator | changed: [testbed-node-0]
2026-04-06 05:57:16.428805 | orchestrator | changed: [testbed-node-1]
2026-04-06 05:57:16.428818 | orchestrator | changed: [testbed-node-2]
2026-04-06 05:57:16.428830 | orchestrator |
2026-04-06 05:57:16.428843 | orchestrator | TASK [service-uwsgi-config : Copying over nova-metadata uWSGI config] **********
2026-04-06 05:57:16.428880 | orchestrator | Monday 06 April 2026 05:57:11 +0000 (0:00:03.380) 0:04:31.928 **********
2026-04-06 05:57:16.428893 | orchestrator | changed: [testbed-node-0]
2026-04-06 05:57:16.428905 | orchestrator | changed: [testbed-node-1]
2026-04-06 05:57:16.428917 | orchestrator | changed: [testbed-node-2]
2026-04-06 05:57:16.428929 | orchestrator |
2026-04-06 05:57:16.428942 | orchestrator | TASK [service-check-containers : nova | Check containers] **********************
2026-04-06 05:57:16.428954 | orchestrator | Monday 06 April 2026 05:57:14 +0000 (0:00:03.467) 0:04:35.395 **********
2026-04-06 05:57:16.428969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 05:57:16.428992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:57:16.429025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:57:20.787030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:57:20.787161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:57:20.787212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 05:57:20.787267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 
05:57:20.787315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 05:57:20.787330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 05:57:20.787342 | orchestrator |
2026-04-06 05:57:20.787355 | orchestrator | TASK [service-check-containers : nova | Notify handlers to restart containers] ***
2026-04-06 05:57:20.787368 | orchestrator | Monday 06 April 2026 05:57:18 +0000 (0:00:04.312) 0:04:39.708 **********
2026-04-06 05:57:20.787380 | orchestrator | changed: [testbed-node-0] => {
2026-04-06 05:57:20.787393 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 05:57:20.787405 | orchestrator | }
2026-04-06 05:57:20.787416 | orchestrator | changed: [testbed-node-1] => {
2026-04-06 05:57:20.787426 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 05:57:20.787439 | orchestrator | }
2026-04-06 05:57:20.787452 | orchestrator | changed: [testbed-node-2] => {
2026-04-06 05:57:20.787465 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 05:57:20.787477 | orchestrator | }
2026-04-06 05:57:20.787489 | orchestrator |
2026-04-06 05:57:20.787502 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-06 05:57:20.787515 | orchestrator | Monday 06 April 2026 05:57:20 +0000 (0:00:01.369) 0:04:41.078 **********
2026-04-06 05:57:20.787529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 05:57:20.787561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:57:20.787585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 05:58:58.145656 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:58:58.145777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:58:58.145800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:58:58.145851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 05:58:58.145934 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:58:58.145947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:58:58.145980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 05:58:58.146176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 05:58:58.146208 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:58:58.146222 | orchestrator | 2026-04-06 05:58:58.146237 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-06 05:58:58.146250 | orchestrator | Monday 06 April 2026 05:57:22 +0000 (0:00:02.199) 0:04:43.277 ********** 2026-04-06 05:58:58.146263 | orchestrator | 2026-04-06 05:58:58.146276 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-06 05:58:58.146289 | orchestrator | Monday 06 April 2026 05:57:23 +0000 (0:00:00.761) 0:04:44.039 ********** 2026-04-06 05:58:58.146301 | orchestrator | 2026-04-06 05:58:58.146313 | orchestrator | TASK [nova : Flush handlers] 
*************************************************** 2026-04-06 05:58:58.146325 | orchestrator | Monday 06 April 2026 05:57:23 +0000 (0:00:00.507) 0:04:44.546 ********** 2026-04-06 05:58:58.146338 | orchestrator | 2026-04-06 05:58:58.146350 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-04-06 05:58:58.146363 | orchestrator | Monday 06 April 2026 05:57:24 +0000 (0:00:00.920) 0:04:45.467 ********** 2026-04-06 05:58:58.146376 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:58:58.146388 | orchestrator | changed: [testbed-node-1] 2026-04-06 05:58:58.146401 | orchestrator | changed: [testbed-node-2] 2026-04-06 05:58:58.146414 | orchestrator | 2026-04-06 05:58:58.146426 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-04-06 05:58:58.146438 | orchestrator | Monday 06 April 2026 05:57:51 +0000 (0:00:26.446) 0:05:11.913 ********** 2026-04-06 05:58:58.146451 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:58:58.146463 | orchestrator | changed: [testbed-node-2] 2026-04-06 05:58:58.146476 | orchestrator | changed: [testbed-node-1] 2026-04-06 05:58:58.146489 | orchestrator | 2026-04-06 05:58:58.146501 | orchestrator | RUNNING HANDLER [nova : Restart nova-metadata container] *********************** 2026-04-06 05:58:58.146514 | orchestrator | Monday 06 April 2026 05:58:04 +0000 (0:00:13.681) 0:05:25.594 ********** 2026-04-06 05:58:58.146535 | orchestrator | changed: [testbed-node-0] 2026-04-06 05:58:58.146548 | orchestrator | changed: [testbed-node-1] 2026-04-06 05:58:58.146560 | orchestrator | changed: [testbed-node-2] 2026-04-06 05:58:58.146573 | orchestrator | 2026-04-06 05:58:58.146584 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-04-06 05:58:58.146595 | orchestrator | 2026-04-06 05:58:58.146606 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 
2026-04-06 05:58:58.146617 | orchestrator | Monday 06 April 2026 05:58:15 +0000 (0:00:11.042) 0:05:36.637 ********** 2026-04-06 05:58:58.146628 | orchestrator | included: /ansible/roles/nova-cell/tasks/reconfigure.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 05:58:58.146640 | orchestrator | 2026-04-06 05:58:58.146651 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-06 05:58:58.146662 | orchestrator | Monday 06 April 2026 05:58:18 +0000 (0:00:02.545) 0:05:39.183 ********** 2026-04-06 05:58:58.146672 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:58:58.146683 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:58:58.146694 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:58:58.146704 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:58:58.146715 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:58:58.146726 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:58:58.146736 | orchestrator | 2026-04-06 05:58:58.146747 | orchestrator | TASK [nova-cell : Get new Libvirt version] ************************************* 2026-04-06 05:58:58.146758 | orchestrator | Monday 06 April 2026 05:58:20 +0000 (0:00:02.356) 0:05:41.539 ********** 2026-04-06 05:58:58.146768 | orchestrator | changed: [testbed-node-3] 2026-04-06 05:58:58.146779 | orchestrator | 2026-04-06 05:58:58.146790 | orchestrator | TASK [nova-cell : Cache new Libvirt version] *********************************** 2026-04-06 05:58:58.146801 | orchestrator | Monday 06 April 2026 05:58:56 +0000 (0:00:35.869) 0:06:17.408 ********** 2026-04-06 05:58:58.146818 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:58:58.146831 | orchestrator | 2026-04-06 05:58:58.146880 | orchestrator | TASK [Get nova_libvirt image info] ********************************************* 2026-04-06 05:59:50.601771 | orchestrator | Monday 06 April 2026 05:58:59 +0000 (0:00:02.542) 
0:06:19.951 ********** 2026-04-06 05:59:50.601936 | orchestrator | included: service-image-info for testbed-node-3 2026-04-06 05:59:50.601954 | orchestrator | 2026-04-06 05:59:50.601966 | orchestrator | TASK [service-image-info : community.docker.docker_image_info] ***************** 2026-04-06 05:59:50.601978 | orchestrator | Monday 06 April 2026 05:59:01 +0000 (0:00:02.066) 0:06:22.017 ********** 2026-04-06 05:59:50.601989 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:59:50.602000 | orchestrator | 2026-04-06 05:59:50.602068 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-04-06 05:59:50.602082 | orchestrator | Monday 06 April 2026 05:59:05 +0000 (0:00:04.346) 0:06:26.363 ********** 2026-04-06 05:59:50.602094 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:59:50.602105 | orchestrator | 2026-04-06 05:59:50.602126 | orchestrator | TASK [service-image-info : containers.podman.podman_image_info] **************** 2026-04-06 05:59:50.602137 | orchestrator | Monday 06 April 2026 05:59:08 +0000 (0:00:02.930) 0:06:29.293 ********** 2026-04-06 05:59:50.602148 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:59:50.602160 | orchestrator | 2026-04-06 05:59:50.602172 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-04-06 05:59:50.602183 | orchestrator | Monday 06 April 2026 05:59:11 +0000 (0:00:03.059) 0:06:32.353 ********** 2026-04-06 05:59:50.602194 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:59:50.602205 | orchestrator | 2026-04-06 05:59:50.602216 | orchestrator | TASK [nova-cell : Get container facts] ***************************************** 2026-04-06 05:59:50.602227 | orchestrator | Monday 06 April 2026 05:59:14 +0000 (0:00:03.176) 0:06:35.530 ********** 2026-04-06 05:59:50.602238 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:59:50.602249 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:59:50.602261 | 
orchestrator | skipping: [testbed-node-2] 2026-04-06 05:59:50.602272 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:59:50.602283 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:59:50.602294 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:59:50.602305 | orchestrator | 2026-04-06 05:59:50.602318 | orchestrator | TASK [nova-cell : Get current Libvirt version] ********************************* 2026-04-06 05:59:50.602331 | orchestrator | Monday 06 April 2026 05:59:20 +0000 (0:00:05.392) 0:06:40.922 ********** 2026-04-06 05:59:50.602344 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:59:50.602357 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:59:50.602370 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:59:50.602382 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:59:50.602395 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:59:50.602408 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:59:50.602421 | orchestrator | 2026-04-06 05:59:50.602434 | orchestrator | TASK [nova-cell : Check that the new Libvirt version is >= current] ************ 2026-04-06 05:59:50.602447 | orchestrator | Monday 06 April 2026 05:59:26 +0000 (0:00:05.880) 0:06:46.803 ********** 2026-04-06 05:59:50.602459 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:59:50.602472 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:59:50.602484 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:59:50.602498 | orchestrator | ok: [testbed-node-3] => { 2026-04-06 05:59:50.602511 | orchestrator |  "changed": false, 2026-04-06 05:59:50.602524 | orchestrator |  "msg": "Libvirt version check successful: target 10.0.0 >= current 10.0.0.\n" 2026-04-06 05:59:50.602538 | orchestrator | } 2026-04-06 05:59:50.602550 | orchestrator | ok: [testbed-node-5] => { 2026-04-06 05:59:50.602564 | orchestrator |  "changed": false, 2026-04-06 05:59:50.602577 | orchestrator |  "msg": "Libvirt version check successful: target 10.0.0 >= current 10.0.0.\n" 2026-04-06 
05:59:50.602590 | orchestrator | } 2026-04-06 05:59:50.602602 | orchestrator | ok: [testbed-node-4] => { 2026-04-06 05:59:50.602615 | orchestrator |  "changed": false, 2026-04-06 05:59:50.602655 | orchestrator |  "msg": "Libvirt version check successful: target 10.0.0 >= current 10.0.0.\n" 2026-04-06 05:59:50.602667 | orchestrator | } 2026-04-06 05:59:50.602678 | orchestrator | 2026-04-06 05:59:50.602689 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-04-06 05:59:50.602700 | orchestrator | Monday 06 April 2026 05:59:33 +0000 (0:00:07.634) 0:06:54.437 ********** 2026-04-06 05:59:50.602711 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:59:50.602736 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:59:50.602748 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:59:50.602759 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 05:59:50.602770 | orchestrator | 2026-04-06 05:59:50.602781 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-06 05:59:50.602792 | orchestrator | Monday 06 April 2026 05:59:35 +0000 (0:00:02.246) 0:06:56.684 ********** 2026-04-06 05:59:50.602803 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-06 05:59:50.602815 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-06 05:59:50.602825 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-06 05:59:50.602836 | orchestrator | 2026-04-06 05:59:50.602854 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-06 05:59:50.602917 | orchestrator | Monday 06 April 2026 05:59:37 +0000 (0:00:01.667) 0:06:58.352 ********** 2026-04-06 05:59:50.602936 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-06 05:59:50.602952 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-06 05:59:50.602963 | 
orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-06 05:59:50.602973 | orchestrator | 2026-04-06 05:59:50.602991 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-06 05:59:50.603009 | orchestrator | Monday 06 April 2026 05:59:39 +0000 (0:00:02.173) 0:07:00.525 ********** 2026-04-06 05:59:50.603029 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-04-06 05:59:50.603041 | orchestrator | skipping: [testbed-node-3] 2026-04-06 05:59:50.603051 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-04-06 05:59:50.603062 | orchestrator | skipping: [testbed-node-4] 2026-04-06 05:59:50.603073 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-04-06 05:59:50.603084 | orchestrator | skipping: [testbed-node-5] 2026-04-06 05:59:50.603094 | orchestrator | 2026-04-06 05:59:50.603105 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-04-06 05:59:50.603134 | orchestrator | Monday 06 April 2026 05:59:41 +0000 (0:00:01.384) 0:07:01.909 ********** 2026-04-06 05:59:50.603146 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-06 05:59:50.603157 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-06 05:59:50.603167 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:59:50.603178 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-06 05:59:50.603189 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-06 05:59:50.603200 | orchestrator | ok: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-06 05:59:50.603210 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-06 05:59:50.603221 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 
2026-04-06 05:59:50.603232 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:59:50.603243 | orchestrator | ok: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-06 05:59:50.603253 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-06 05:59:50.603264 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-06 05:59:50.603275 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:59:50.603285 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-06 05:59:50.603306 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-06 05:59:50.603317 | orchestrator | 2026-04-06 05:59:50.603328 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-04-06 05:59:50.603338 | orchestrator | Monday 06 April 2026 05:59:43 +0000 (0:00:02.380) 0:07:04.290 ********** 2026-04-06 05:59:50.603349 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:59:50.603360 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:59:50.603370 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:59:50.603381 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:59:50.603392 | orchestrator | ok: [testbed-node-4] 2026-04-06 05:59:50.603403 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:59:50.603414 | orchestrator | 2026-04-06 05:59:50.603424 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-04-06 05:59:50.603435 | orchestrator | Monday 06 April 2026 05:59:45 +0000 (0:00:02.437) 0:07:06.727 ********** 2026-04-06 05:59:50.603446 | orchestrator | skipping: [testbed-node-0] 2026-04-06 05:59:50.603456 | orchestrator | skipping: [testbed-node-1] 2026-04-06 05:59:50.603467 | orchestrator | skipping: [testbed-node-2] 2026-04-06 05:59:50.603478 | orchestrator | ok: [testbed-node-3] 2026-04-06 05:59:50.603488 | 
orchestrator | ok: [testbed-node-4] 2026-04-06 05:59:50.603499 | orchestrator | ok: [testbed-node-5] 2026-04-06 05:59:50.603510 | orchestrator | 2026-04-06 05:59:50.603521 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-06 05:59:50.603531 | orchestrator | Monday 06 April 2026 05:59:49 +0000 (0:00:03.465) 0:07:10.193 ********** 2026-04-06 05:59:50.603552 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 05:59:50.603568 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 05:59:50.603589 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 05:59:54.264386 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 05:59:54.264487 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 05:59:54.264504 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 05:59:54.264533 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 05:59:54.264547 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 05:59:54.264559 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 05:59:54.264613 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 05:59:54.264627 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 05:59:54.264638 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 05:59:54.264656 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 05:59:54.264667 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 05:59:54.264679 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 05:59:54.264698 | orchestrator | 2026-04-06 05:59:54.264711 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-06 05:59:54.264723 | orchestrator | Monday 06 April 2026 05:59:52 +0000 (0:00:03.422) 0:07:13.615 ********** 2026-04-06 05:59:54.264741 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 05:59:58.386242 | orchestrator | 2026-04-06 05:59:58.386381 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-06 05:59:58.386398 | orchestrator | Monday 06 April 2026 05:59:55 +0000 (0:00:02.266) 0:07:15.882 ********** 2026-04-06 05:59:58.386415 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 05:59:58.386431 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 05:59:58.386459 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 05:59:58.386473 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 05:59:58.386528 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 05:59:58.386543 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 05:59:58.386556 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 05:59:58.386569 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 05:59:58.386587 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 05:59:58.386600 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 05:59:58.386620 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 05:59:58.386641 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 06:00:01.849497 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 06:00:01.849609 | orchestrator | ok: [testbed-node-5] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 06:00:01.849645 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 06:00:01.849659 | orchestrator | 2026-04-06 06:00:01.849673 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-06 06:00:01.849685 | orchestrator | Monday 06 April 2026 05:59:59 +0000 (0:00:04.811) 0:07:20.694 ********** 2026-04-06 06:00:01.849721 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:00:01.849736 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:00:01.849769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:00:01.849781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 06:00:01.849793 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:00:01.849812 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:00:01.849831 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 06:00:01.849843 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:00:01.849854 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:00:01.849949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:00:04.291930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:00:04.292038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:00:04.292055 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:00:04.292085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 06:00:04.292120 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:00:04.292133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:00:04.292146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:00:04.292158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:00:04.292169 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:00:04.292199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:00:04.292212 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:00:04.292223 | orchestrator | 2026-04-06 06:00:04.292235 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-06 06:00:04.292248 | orchestrator | Monday 06 April 2026 06:00:03 +0000 (0:00:03.374) 0:07:24.068 ********** 2026-04-06 06:00:04.292266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:00:04.292291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:00:04.292311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:00:04.292332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:00:04.292365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:00:13.280599 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 06:00:13.280737 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:00:13.280759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:00:13.280774 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 06:00:13.280786 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:00:13.280798 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 06:00:13.280809 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:00:13.280821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:00:13.280854 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:00:13.280933 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:00:13.280954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:00:13.280968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:00:13.280979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:00:13.280991 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:00:13.281003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:00:13.281014 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:00:13.281025 | orchestrator | 2026-04-06 06:00:13.281037 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-06 06:00:13.281050 | orchestrator | Monday 06 April 2026 06:00:06 +0000 (0:00:03.376) 0:07:27.444 ********** 2026-04-06 06:00:13.281061 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:00:13.281072 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:00:13.281084 | orchestrator | skipping: 
[testbed-node-2] 2026-04-06 06:00:13.281098 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 06:00:13.281112 | orchestrator | 2026-04-06 06:00:13.281124 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-06 06:00:13.281137 | orchestrator | Monday 06 April 2026 06:00:08 +0000 (0:00:02.096) 0:07:29.542 ********** 2026-04-06 06:00:13.281150 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-06 06:00:13.281163 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-06 06:00:13.281174 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-06 06:00:13.281187 | orchestrator | 2026-04-06 06:00:13.281200 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-06 06:00:13.281220 | orchestrator | Monday 06 April 2026 06:00:10 +0000 (0:00:02.062) 0:07:31.604 ********** 2026-04-06 06:00:13.281233 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-06 06:00:13.281246 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-06 06:00:13.281258 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-06 06:00:13.281271 | orchestrator | 2026-04-06 06:00:13.281284 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-06 06:00:13.281297 | orchestrator | Monday 06 April 2026 06:00:12 +0000 (0:00:02.127) 0:07:33.731 ********** 2026-04-06 06:00:13.281318 | orchestrator | ok: [testbed-node-3] 2026-04-06 06:00:56.865830 | orchestrator | ok: [testbed-node-4] 2026-04-06 06:00:56.866013 | orchestrator | ok: [testbed-node-5] 2026-04-06 06:00:56.866087 | orchestrator | 2026-04-06 06:00:56.866101 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-04-06 06:00:56.866113 | orchestrator | Monday 06 April 2026 06:00:14 +0000 (0:00:01.659) 0:07:35.391 ********** 2026-04-06 
06:00:56.866125 | orchestrator | ok: [testbed-node-3] 2026-04-06 06:00:56.866136 | orchestrator | ok: [testbed-node-4] 2026-04-06 06:00:56.866146 | orchestrator | ok: [testbed-node-5] 2026-04-06 06:00:56.866158 | orchestrator | 2026-04-06 06:00:56.866169 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-04-06 06:00:56.866180 | orchestrator | Monday 06 April 2026 06:00:16 +0000 (0:00:01.839) 0:07:37.230 ********** 2026-04-06 06:00:56.866191 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-06 06:00:56.866203 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-06 06:00:56.866214 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-06 06:00:56.866225 | orchestrator | 2026-04-06 06:00:56.866236 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-06 06:00:56.866264 | orchestrator | Monday 06 April 2026 06:00:19 +0000 (0:00:03.008) 0:07:40.239 ********** 2026-04-06 06:00:56.866284 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-06 06:00:56.866301 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-06 06:00:56.866319 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-06 06:00:56.866339 | orchestrator | 2026-04-06 06:00:56.866359 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-06 06:00:56.866377 | orchestrator | Monday 06 April 2026 06:00:21 +0000 (0:00:02.209) 0:07:42.449 ********** 2026-04-06 06:00:56.866395 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-06 06:00:56.866407 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-06 06:00:56.866418 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-06 06:00:56.866429 | orchestrator | ok: [testbed-node-3] => (item=nova-libvirt) 2026-04-06 06:00:56.866440 | orchestrator | ok: [testbed-node-4] => 
(item=nova-libvirt) 2026-04-06 06:00:56.866451 | orchestrator | ok: [testbed-node-5] => (item=nova-libvirt) 2026-04-06 06:00:56.866462 | orchestrator | 2026-04-06 06:00:56.866474 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-06 06:00:56.866485 | orchestrator | Monday 06 April 2026 06:00:26 +0000 (0:00:04.900) 0:07:47.349 ********** 2026-04-06 06:00:56.866497 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:00:56.866509 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:00:56.866520 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:00:56.866531 | orchestrator | 2026-04-06 06:00:56.866542 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-06 06:00:56.866554 | orchestrator | Monday 06 April 2026 06:00:28 +0000 (0:00:01.536) 0:07:48.886 ********** 2026-04-06 06:00:56.866565 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:00:56.866576 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:00:56.866587 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:00:56.866598 | orchestrator | 2026-04-06 06:00:56.866609 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-04-06 06:00:56.866621 | orchestrator | Monday 06 April 2026 06:00:29 +0000 (0:00:01.416) 0:07:50.302 ********** 2026-04-06 06:00:56.866656 | orchestrator | ok: [testbed-node-3] 2026-04-06 06:00:56.866667 | orchestrator | ok: [testbed-node-4] 2026-04-06 06:00:56.866678 | orchestrator | ok: [testbed-node-5] 2026-04-06 06:00:56.866689 | orchestrator | 2026-04-06 06:00:56.866700 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-06 06:00:56.866711 | orchestrator | Monday 06 April 2026 06:00:32 +0000 (0:00:02.472) 0:07:52.775 ********** 2026-04-06 06:00:56.866724 | orchestrator | changed: [testbed-node-3] => (item={'uuid': 
'5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-06 06:00:56.866736 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-06 06:00:56.866747 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-06 06:00:56.866760 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-06 06:00:56.866771 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-06 06:00:56.866782 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-06 06:00:56.866793 | orchestrator | 2026-04-06 06:00:56.866805 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-04-06 06:00:56.866816 | orchestrator | Monday 06 April 2026 06:00:36 +0000 (0:00:04.485) 0:07:57.261 ********** 2026-04-06 06:00:56.866827 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-06 06:00:56.866838 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-06 06:00:56.866849 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-06 06:00:56.866860 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-06 06:00:56.866903 | orchestrator | 
ok: [testbed-node-3] 2026-04-06 06:00:56.866933 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-06 06:00:56.866945 | orchestrator | ok: [testbed-node-4] 2026-04-06 06:00:56.866956 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-06 06:00:56.866967 | orchestrator | ok: [testbed-node-5] 2026-04-06 06:00:56.866978 | orchestrator | 2026-04-06 06:00:56.866989 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-04-06 06:00:56.867000 | orchestrator | Monday 06 April 2026 06:00:40 +0000 (0:00:04.437) 0:08:01.699 ********** 2026-04-06 06:00:56.867010 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:00:56.867021 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:00:56.867032 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:00:56.867043 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-4, testbed-node-5, testbed-node-3 2026-04-06 06:00:56.867054 | orchestrator | 2026-04-06 06:00:56.867065 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-04-06 06:00:56.867076 | orchestrator | Monday 06 April 2026 06:00:44 +0000 (0:00:03.340) 0:08:05.040 ********** 2026-04-06 06:00:56.867087 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-06 06:00:56.867105 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-06 06:00:56.867116 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-06 06:00:56.867127 | orchestrator | 2026-04-06 06:00:56.867138 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-04-06 06:00:56.867149 | orchestrator | Monday 06 April 2026 06:00:46 +0000 (0:00:02.352) 0:08:07.392 ********** 2026-04-06 06:00:56.867168 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:00:56.867180 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:00:56.867190 | orchestrator | skipping: [testbed-node-5] 2026-04-06 
06:00:56.867201 | orchestrator | 2026-04-06 06:00:56.867212 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-04-06 06:00:56.867223 | orchestrator | Monday 06 April 2026 06:00:47 +0000 (0:00:01.366) 0:08:08.759 ********** 2026-04-06 06:00:56.867233 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:00:56.867244 | orchestrator | 2026-04-06 06:00:56.867255 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-04-06 06:00:56.867266 | orchestrator | Monday 06 April 2026 06:00:49 +0000 (0:00:01.165) 0:08:09.925 ********** 2026-04-06 06:00:56.867276 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:00:56.867287 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:00:56.867298 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:00:56.867310 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:00:56.867329 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:00:56.867347 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:00:56.867366 | orchestrator | 2026-04-06 06:00:56.867385 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-04-06 06:00:56.867406 | orchestrator | Monday 06 April 2026 06:00:51 +0000 (0:00:01.912) 0:08:11.837 ********** 2026-04-06 06:00:56.867424 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-06 06:00:56.867443 | orchestrator | 2026-04-06 06:00:56.867454 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-04-06 06:00:56.867465 | orchestrator | Monday 06 April 2026 06:00:52 +0000 (0:00:01.792) 0:08:13.630 ********** 2026-04-06 06:00:56.867476 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:00:56.867487 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:00:56.867498 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:00:56.867509 | orchestrator | skipping: [testbed-node-0] 
2026-04-06 06:00:56.867520 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:00:56.867531 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:00:56.867542 | orchestrator | 2026-04-06 06:00:56.867553 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-04-06 06:00:56.867564 | orchestrator | Monday 06 April 2026 06:00:54 +0000 (0:00:01.851) 0:08:15.481 ********** 2026-04-06 06:00:56.867578 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 06:00:56.867604 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 06:00:59.665826 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 06:00:59.665977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 06:00:59.665995 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 06:00:59.666007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 06:00:59.666070 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 06:00:59.666086 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 06:00:59.666181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 06:00:59.666216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 06:00:59.666237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 06:00:59.666258 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 06:00:59.666272 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 06:00:59.666284 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 06:00:59.666319 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 06:01:05.266658 | orchestrator | 2026-04-06 06:01:05.266759 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-04-06 06:01:05.266775 | orchestrator | Monday 06 April 2026 06:01:00 +0000 (0:00:06.053) 0:08:21.534 ********** 2026-04-06 06:01:05.266789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:01:05.266805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:01:05.266818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:01:05.266830 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:01:05.266922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:01:05.266984 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:01:05.267000 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 06:01:05.267012 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 06:01:05.267024 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 06:01:05.267045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 06:01:05.267071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 06:01:30.032061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 06:01:30.032179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 06:01:30.032196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 06:01:30.032209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 06:01:30.032249 | orchestrator | 2026-04-06 06:01:30.032263 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-04-06 06:01:30.032276 | orchestrator | Monday 06 April 2026 06:01:08 +0000 (0:00:07.834) 0:08:29.369 ********** 2026-04-06 06:01:30.032287 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:01:30.032299 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:01:30.032310 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:01:30.032321 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:01:30.032332 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:01:30.032343 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:01:30.032353 | orchestrator | 2026-04-06 06:01:30.032365 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] 
************************** 2026-04-06 06:01:30.032392 | orchestrator | Monday 06 April 2026 06:01:11 +0000 (0:00:02.821) 0:08:32.191 ********** 2026-04-06 06:01:30.032404 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-06 06:01:30.032416 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-06 06:01:30.032426 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-06 06:01:30.032438 | orchestrator | ok: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-06 06:01:30.032449 | orchestrator | ok: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-06 06:01:30.032460 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-06 06:01:30.032472 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:01:30.032483 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-06 06:01:30.032495 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:01:30.032521 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-06 06:01:30.032533 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:01:30.032544 | orchestrator | ok: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-06 06:01:30.032572 | orchestrator | ok: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-06 06:01:30.032584 | orchestrator | ok: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-06 06:01:30.032599 | orchestrator | ok: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-06 06:01:30.032613 | orchestrator | 2026-04-06 06:01:30.032626 | orchestrator | TASK [nova-cell : 
Copying over libvirt TLS keys] ******************************* 2026-04-06 06:01:30.032639 | orchestrator | Monday 06 April 2026 06:01:17 +0000 (0:00:05.953) 0:08:38.145 ********** 2026-04-06 06:01:30.032653 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:01:30.032666 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:01:30.032680 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:01:30.032693 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:01:30.032706 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:01:30.032719 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:01:30.032732 | orchestrator | 2026-04-06 06:01:30.032745 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-04-06 06:01:30.032759 | orchestrator | Monday 06 April 2026 06:01:19 +0000 (0:00:01.855) 0:08:40.000 ********** 2026-04-06 06:01:30.032772 | orchestrator | ok: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-06 06:01:30.032784 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-06 06:01:30.032798 | orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-06 06:01:30.032820 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-06 06:01:30.032834 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-06 06:01:30.032847 | orchestrator | ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-06 06:01:30.032861 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-06 06:01:30.032915 | orchestrator | 
skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-06 06:01:30.032929 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-06 06:01:30.032943 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-06 06:01:30.032957 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:01:30.032968 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-06 06:01:30.032978 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:01:30.032989 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-06 06:01:30.033000 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:01:30.033012 | orchestrator | ok: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-06 06:01:30.033023 | orchestrator | ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-06 06:01:30.033034 | orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-06 06:01:30.033044 | orchestrator | ok: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-06 06:01:30.033055 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-06 06:01:30.033066 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-06 06:01:30.033077 | orchestrator | 2026-04-06 06:01:30.033088 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 
2026-04-06 06:01:30.033099 | orchestrator | Monday 06 April 2026 06:01:25 +0000 (0:00:06.625) 0:08:46.626 ********** 2026-04-06 06:01:30.033110 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-06 06:01:30.033122 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-06 06:01:30.033132 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-06 06:01:30.033143 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-06 06:01:30.033154 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-06 06:01:30.033165 | orchestrator | ok: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-06 06:01:30.033176 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-06 06:01:30.033193 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-06 06:01:30.033204 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-06 06:01:30.033216 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-06 06:01:30.033234 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-06 06:01:48.059331 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-06 06:01:48.059482 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-06 06:01:48.059500 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:01:48.059514 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-06 06:01:48.059525 | orchestrator | ok: [testbed-node-3] => 
(item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-06 06:01:48.059537 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-06 06:01:48.059547 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:01:48.059559 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-06 06:01:48.059569 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:01:48.059580 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-06 06:01:48.059591 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-06 06:01:48.059602 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-06 06:01:48.059613 | orchestrator | ok: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-06 06:01:48.059623 | orchestrator | ok: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-06 06:01:48.059634 | orchestrator | ok: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-06 06:01:48.059645 | orchestrator | ok: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-06 06:01:48.059656 | orchestrator | 2026-04-06 06:01:48.059667 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-04-06 06:01:48.059678 | orchestrator | Monday 06 April 2026 06:01:35 +0000 (0:00:09.599) 0:08:56.226 ********** 2026-04-06 06:01:48.059689 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:01:48.059700 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:01:48.059711 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:01:48.059721 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:01:48.059732 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:01:48.059743 | orchestrator | skipping: [testbed-node-2] 2026-04-06 
06:01:48.059754 | orchestrator | 2026-04-06 06:01:48.059765 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-04-06 06:01:48.059776 | orchestrator | Monday 06 April 2026 06:01:37 +0000 (0:00:01.849) 0:08:58.076 ********** 2026-04-06 06:01:48.059795 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:01:48.059815 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:01:48.059833 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:01:48.059851 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:01:48.059897 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:01:48.059918 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:01:48.059935 | orchestrator | 2026-04-06 06:01:48.059952 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-04-06 06:01:48.059971 | orchestrator | Monday 06 April 2026 06:01:39 +0000 (0:00:01.842) 0:08:59.919 ********** 2026-04-06 06:01:48.059990 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:01:48.060009 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:01:48.060028 | orchestrator | ok: [testbed-node-3] 2026-04-06 06:01:48.060048 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:01:48.060067 | orchestrator | ok: [testbed-node-4] 2026-04-06 06:01:48.060087 | orchestrator | ok: [testbed-node-5] 2026-04-06 06:01:48.060106 | orchestrator | 2026-04-06 06:01:48.060126 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] ******************* 2026-04-06 06:01:48.060140 | orchestrator | Monday 06 April 2026 06:01:42 +0000 (0:00:03.586) 0:09:03.505 ********** 2026-04-06 06:01:48.060153 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:01:48.060167 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:01:48.060180 | orchestrator | changed: [testbed-node-3] 2026-04-06 06:01:48.060193 | orchestrator | changed: [testbed-node-4] 2026-04-06 06:01:48.060218 | 
orchestrator | changed: [testbed-node-5] 2026-04-06 06:01:48.060229 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:01:48.060240 | orchestrator | 2026-04-06 06:01:48.060251 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-04-06 06:01:48.060262 | orchestrator | Monday 06 April 2026 06:01:46 +0000 (0:00:03.729) 0:09:07.235 ********** 2026-04-06 06:01:48.060293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:01:48.060330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:01:48.060345 | orchestrator 
| skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 06:01:48.060358 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:01:48.060370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:01:48.060384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:01:48.060415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:01:48.060455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 06:01:54.112420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:01:54.112498 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:01:54.112506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 06:01:54.112512 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:01:54.112517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:01:54.112537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:01:54.112541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:01:54.112545 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:01:54.112559 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:01:54.112564 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:01:54.112577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:01:54.112582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 
06:01:54.112586 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:01:54.112590 | orchestrator | 2026-04-06 06:01:54.112594 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-04-06 06:01:54.112599 | orchestrator | Monday 06 April 2026 06:01:50 +0000 (0:00:03.649) 0:09:10.885 ********** 2026-04-06 06:01:54.112604 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-06 06:01:54.112612 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-06 06:01:54.112616 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:01:54.112620 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-06 06:01:54.112624 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-06 06:01:54.112628 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:01:54.112631 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-06 06:01:54.112635 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-06 06:01:54.112639 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-06 06:01:54.112643 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:01:54.112647 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-06 06:01:54.112650 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-06 06:01:54.112654 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-06 06:01:54.112658 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:01:54.112662 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:01:54.112666 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-06 06:01:54.112670 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-06 06:01:54.112673 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:01:54.112677 | orchestrator | 
2026-04-06 06:01:54.112681 | orchestrator | TASK [service-check-containers : nova_cell | Check containers] ***************** 2026-04-06 06:01:54.112685 | orchestrator | Monday 06 April 2026 06:01:52 +0000 (0:00:01.955) 0:09:12.840 ********** 2026-04-06 06:01:54.112692 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 06:01:54.112701 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 06:01:55.424542 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 06:01:55.424663 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 06:01:55.424679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 06:01:55.424692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 06:01:55.424717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 06:01:55.424728 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 06:01:55.424755 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 06:01:55.424773 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 06:01:55.424786 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 06:01:55.424796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 06:01:55.424812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 06:01:55.424830 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 06:02:00.362287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 06:02:00.362424 | orchestrator | 2026-04-06 06:02:00.362443 | orchestrator | TASK [service-check-containers : nova_cell | Notify handlers to restart containers] *** 2026-04-06 06:02:00.362460 | orchestrator | Monday 06 April 2026 06:01:56 +0000 (0:00:04.765) 0:09:17.606 ********** 2026-04-06 06:02:00.362481 | orchestrator | changed: [testbed-node-3] => { 2026-04-06 06:02:00.362502 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:02:00.362521 | orchestrator | } 2026-04-06 06:02:00.362539 | orchestrator | changed: 
[testbed-node-4] => { 2026-04-06 06:02:00.362558 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:02:00.362576 | orchestrator | } 2026-04-06 06:02:00.362594 | orchestrator | changed: [testbed-node-5] => { 2026-04-06 06:02:00.362612 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:02:00.362631 | orchestrator | } 2026-04-06 06:02:00.362651 | orchestrator | changed: [testbed-node-0] => { 2026-04-06 06:02:00.362668 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:02:00.362686 | orchestrator | } 2026-04-06 06:02:00.362704 | orchestrator | changed: [testbed-node-1] => { 2026-04-06 06:02:00.362723 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:02:00.362743 | orchestrator | } 2026-04-06 06:02:00.362756 | orchestrator | changed: [testbed-node-2] => { 2026-04-06 06:02:00.362768 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:02:00.362778 | orchestrator | } 2026-04-06 06:02:00.362790 | orchestrator | 2026-04-06 06:02:00.362801 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-06 06:02:00.362812 | orchestrator | Monday 06 April 2026 06:01:58 +0000 (0:00:01.850) 0:09:19.456 ********** 2026-04-06 06:02:00.362830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:02:00.362847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:02:00.362908 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 06:02:00.362936 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:02:00.362972 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:02:00.362985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:02:00.362997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:02:00.363009 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:02:00.363021 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:02:00.363038 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 06:02:00.363057 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:02:00.363075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:04:57.620122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:04:57.620242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:04:57.620261 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 06:04:57.620274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:04:57.620288 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:04:57.620317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:04:57.620352 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:04:57.620364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:04:57.620376 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:04:57.620388 | orchestrator | 2026-04-06 06:04:57.620417 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-06 06:04:57.620431 | orchestrator | Monday 06 April 2026 06:02:01 +0000 (0:00:03.216) 0:09:22.673 ********** 2026-04-06 06:04:57.620442 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:04:57.620453 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:04:57.620464 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:04:57.620475 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:04:57.620485 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:04:57.620496 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:04:57.620507 | orchestrator | 2026-04-06 06:04:57.620518 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-06 06:04:57.620529 | orchestrator | Monday 06 April 2026 06:02:03 +0000 (0:00:01.928) 0:09:24.601 ********** 2026-04-06 06:04:57.620540 | orchestrator | 2026-04-06 06:04:57.620554 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-04-06 06:04:57.620566 | orchestrator | Monday 06 April 2026 06:02:04 +0000 (0:00:00.541) 0:09:25.143 ********** 2026-04-06 06:04:57.620584 | orchestrator | 2026-04-06 06:04:57.620604 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-06 06:04:57.620622 | orchestrator | Monday 06 April 2026 06:02:05 +0000 (0:00:00.750) 0:09:25.894 ********** 2026-04-06 06:04:57.620641 | orchestrator | 2026-04-06 06:04:57.620659 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-06 06:04:57.620677 | orchestrator | Monday 06 April 2026 06:02:05 +0000 (0:00:00.542) 0:09:26.436 ********** 2026-04-06 06:04:57.620696 | orchestrator | 2026-04-06 06:04:57.620714 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-06 06:04:57.620732 | orchestrator | Monday 06 April 2026 06:02:06 +0000 (0:00:00.529) 0:09:26.966 ********** 2026-04-06 06:04:57.620750 | orchestrator | 2026-04-06 06:04:57.620770 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-06 06:04:57.620790 | orchestrator | Monday 06 April 2026 06:02:06 +0000 (0:00:00.540) 0:09:27.507 ********** 2026-04-06 06:04:57.620808 | orchestrator | 2026-04-06 06:04:57.620827 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-04-06 06:04:57.620845 | orchestrator | Monday 06 April 2026 06:02:07 +0000 (0:00:00.883) 0:09:28.391 ********** 2026-04-06 06:04:57.620864 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:04:57.620884 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:04:57.620931 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:04:57.620952 | orchestrator | 2026-04-06 06:04:57.620970 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-04-06 06:04:57.621005 
| orchestrator | Monday 06 April 2026 06:02:27 +0000 (0:00:19.742) 0:09:48.133 ********** 2026-04-06 06:04:57.621024 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:04:57.621042 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:04:57.621061 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:04:57.621081 | orchestrator | 2026-04-06 06:04:57.621101 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-04-06 06:04:57.621120 | orchestrator | Monday 06 April 2026 06:02:49 +0000 (0:00:21.990) 0:10:10.124 ********** 2026-04-06 06:04:57.621140 | orchestrator | changed: [testbed-node-5] 2026-04-06 06:04:57.621159 | orchestrator | changed: [testbed-node-3] 2026-04-06 06:04:57.621178 | orchestrator | changed: [testbed-node-4] 2026-04-06 06:04:57.621197 | orchestrator | 2026-04-06 06:04:57.621215 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-04-06 06:04:57.621234 | orchestrator | Monday 06 April 2026 06:03:16 +0000 (0:00:27.151) 0:10:37.276 ********** 2026-04-06 06:04:57.621253 | orchestrator | changed: [testbed-node-3] 2026-04-06 06:04:57.621274 | orchestrator | changed: [testbed-node-5] 2026-04-06 06:04:57.621293 | orchestrator | changed: [testbed-node-4] 2026-04-06 06:04:57.621312 | orchestrator | 2026-04-06 06:04:57.621332 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-04-06 06:04:57.621350 | orchestrator | Monday 06 April 2026 06:04:01 +0000 (0:00:44.780) 0:11:22.056 ********** 2026-04-06 06:04:57.621369 | orchestrator | changed: [testbed-node-3] 2026-04-06 06:04:57.621385 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 
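The "Checking libvirt container is ready" handler above shows Ansible's retries/until pattern in action: testbed-node-4 failed its first readiness probe ("10 retries left") and succeeded on a later attempt. A minimal sketch of that polling loop, with a hypothetical `check` callable standing in for the actual libvirt probe:

```python
import time


def wait_until_ready(check, retries=10, delay=5.0, sleep=time.sleep):
    """Poll check() until it returns True, mimicking the Ansible
    retries/until loop seen in the handler output above.

    Returns the 1-based attempt number that succeeded, or raises
    TimeoutError once all retries are exhausted.
    """
    for attempt in range(1, retries + 1):
        if check():
            return attempt
        if attempt < retries:
            sleep(delay)
    raise TimeoutError(f"not ready after {retries} attempts")
```

For example, a probe that only passes on its third call would return 3, matching a log line of the form "FAILED - RETRYING ... (n retries left)" for the first two attempts.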
2026-04-06 06:04:57.621404 | orchestrator | changed: [testbed-node-5] 2026-04-06 06:04:57.621423 | orchestrator | changed: [testbed-node-4] 2026-04-06 06:04:57.621441 | orchestrator | 2026-04-06 06:04:57.621480 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-04-06 06:04:57.621525 | orchestrator | Monday 06 April 2026 06:04:08 +0000 (0:00:07.070) 0:11:29.127 ********** 2026-04-06 06:04:57.621547 | orchestrator | changed: [testbed-node-3] 2026-04-06 06:04:57.621559 | orchestrator | changed: [testbed-node-4] 2026-04-06 06:04:57.621570 | orchestrator | changed: [testbed-node-5] 2026-04-06 06:04:57.621589 | orchestrator | 2026-04-06 06:04:57.621607 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-04-06 06:04:57.621625 | orchestrator | Monday 06 April 2026 06:04:10 +0000 (0:00:01.778) 0:11:30.906 ********** 2026-04-06 06:04:57.621645 | orchestrator | changed: [testbed-node-4] 2026-04-06 06:04:57.621665 | orchestrator | changed: [testbed-node-5] 2026-04-06 06:04:57.621682 | orchestrator | changed: [testbed-node-3] 2026-04-06 06:04:57.621702 | orchestrator | 2026-04-06 06:04:57.621721 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-04-06 06:04:57.621741 | orchestrator | Monday 06 April 2026 06:04:46 +0000 (0:00:36.695) 0:12:07.602 ********** 2026-04-06 06:04:57.621759 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:04:57.621771 | orchestrator | 2026-04-06 06:04:57.621782 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-04-06 06:04:57.621793 | orchestrator | Monday 06 April 2026 06:04:48 +0000 (0:00:01.488) 0:12:09.090 ********** 2026-04-06 06:04:57.621803 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:04:57.621814 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:04:57.621825 | orchestrator | skipping: [testbed-node-3] 
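Each container item logged above carries a kolla-style `healthcheck` dict (`interval`, `retries`, `start_period`, `test`, `timeout`). As a rough sketch of how such a dict maps onto Docker's standard health flags, assuming the bare numeric strings are seconds and the test list is `['CMD-SHELL', '<command>']` as in the log (the function name and flag rendering are illustrative, not kolla-ansible's actual code path):

```python
def healthcheck_to_docker_args(hc):
    """Translate a kolla-style healthcheck dict (as seen in the log)
    into `docker run` health flags.

    Assumption: interval/start_period/timeout values are seconds, and
    hc['test'] is ['CMD-SHELL', '<shell command>'].
    """
    kind, cmd = hc["test"][0], " ".join(hc["test"][1:])
    if kind != "CMD-SHELL":
        raise ValueError(f"unsupported test type: {kind}")
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]
```

Applied to the nova-ssh entry above, this would yield `--health-cmd=healthcheck_listen sshd 8022` plus 30s interval/timeout, 3 retries, and a 5s start period.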
2026-04-06 06:04:57.621836 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:04:57.621847 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:04:57.621864 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-06 06:04:57.621881 | orchestrator | 2026-04-06 06:04:57.621923 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-04-06 06:04:57.621960 | orchestrator | Monday 06 April 2026 06:04:57 +0000 (0:00:09.291) 0:12:18.382 ********** 2026-04-06 06:05:58.281595 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:05:58.281711 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:05:58.281727 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:05:58.281764 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:05:58.281775 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:05:58.281787 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:05:58.281798 | orchestrator | 2026-04-06 06:05:58.281810 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-04-06 06:05:58.281822 | orchestrator | Monday 06 April 2026 06:05:08 +0000 (0:00:11.179) 0:12:29.562 ********** 2026-04-06 06:05:58.281833 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:05:58.281844 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:05:58.281854 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:05:58.281865 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:05:58.281876 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:05:58.281887 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-04-06 06:05:58.281898 | orchestrator | 2026-04-06 06:05:58.281909 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-06 06:05:58.281920 | orchestrator | Monday 06 April 2026 06:05:14 +0000 (0:00:05.418) 0:12:34.981 
********** 2026-04-06 06:05:58.281931 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-06 06:05:58.281942 | orchestrator | 2026-04-06 06:05:58.281952 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-06 06:05:58.282092 | orchestrator | Monday 06 April 2026 06:05:28 +0000 (0:00:14.043) 0:12:49.024 ********** 2026-04-06 06:05:58.282111 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-06 06:05:58.282125 | orchestrator | 2026-04-06 06:05:58.282138 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-04-06 06:05:58.282151 | orchestrator | Monday 06 April 2026 06:05:31 +0000 (0:00:02.775) 0:12:51.800 ********** 2026-04-06 06:05:58.282164 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:05:58.282176 | orchestrator | 2026-04-06 06:05:58.282189 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-04-06 06:05:58.282203 | orchestrator | Monday 06 April 2026 06:05:33 +0000 (0:00:02.492) 0:12:54.292 ********** 2026-04-06 06:05:58.282216 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-06 06:05:58.282228 | orchestrator | 2026-04-06 06:05:58.282242 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-04-06 06:05:58.282255 | orchestrator | 2026-04-06 06:05:58.282267 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-04-06 06:05:58.282281 | orchestrator | Monday 06 April 2026 06:05:45 +0000 (0:00:11.675) 0:13:05.968 ********** 2026-04-06 06:05:58.282294 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:05:58.282307 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:05:58.282320 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:05:58.282332 | orchestrator | 2026-04-06 06:05:58.282346 | orchestrator | PLAY [Reload 
global Nova super conductor services] ***************************** 2026-04-06 06:05:58.282358 | orchestrator | 2026-04-06 06:05:58.282371 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-04-06 06:05:58.282384 | orchestrator | Monday 06 April 2026 06:05:47 +0000 (0:00:02.210) 0:13:08.178 ********** 2026-04-06 06:05:58.282397 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:05:58.282410 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:05:58.282423 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:05:58.282435 | orchestrator | 2026-04-06 06:05:58.282448 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-04-06 06:05:58.282473 | orchestrator | 2026-04-06 06:05:58.282484 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-04-06 06:05:58.282507 | orchestrator | Monday 06 April 2026 06:05:49 +0000 (0:00:01.982) 0:13:10.160 ********** 2026-04-06 06:05:58.282518 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-04-06 06:05:58.282530 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-06 06:05:58.282541 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-06 06:05:58.282562 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-04-06 06:05:58.282573 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-04-06 06:05:58.282597 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-04-06 06:05:58.282608 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:05:58.282619 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-04-06 06:05:58.282630 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-06 06:05:58.282640 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-06 
06:05:58.282651 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-04-06 06:05:58.282662 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-04-06 06:05:58.282673 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-04-06 06:05:58.282684 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:05:58.282695 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-04-06 06:05:58.282706 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-06 06:05:58.282717 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-06 06:05:58.282728 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-04-06 06:05:58.282738 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-04-06 06:05:58.282749 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-04-06 06:05:58.282760 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:05:58.282783 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-04-06 06:05:58.282794 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-06 06:05:58.282805 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-06 06:05:58.282816 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-04-06 06:05:58.282855 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-04-06 06:05:58.282876 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-04-06 06:05:58.282892 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:05:58.282908 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-04-06 06:05:58.282925 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-06 06:05:58.282942 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-06 
06:05:58.282959 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-04-06 06:05:58.283003 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-04-06 06:05:58.283021 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-04-06 06:05:58.283038 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:05:58.283057 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-04-06 06:05:58.283077 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-06 06:05:58.283096 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-06 06:05:58.283109 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-04-06 06:05:58.283120 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-04-06 06:05:58.283131 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-04-06 06:05:58.283142 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:05:58.283153 | orchestrator | 2026-04-06 06:05:58.283164 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-04-06 06:05:58.283174 | orchestrator | 2026-04-06 06:05:58.283185 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-04-06 06:05:58.283196 | orchestrator | Monday 06 April 2026 06:05:52 +0000 (0:00:02.728) 0:13:12.889 ********** 2026-04-06 06:05:58.283207 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-04-06 06:05:58.283218 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-06 06:05:58.283238 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:05:58.283249 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-04-06 06:05:58.283260 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-06 06:05:58.283271 | orchestrator | skipping: [testbed-node-1] 
2026-04-06 06:05:58.283281 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-04-06 06:05:58.283292 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-04-06 06:05:58.283303 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:05:58.283314 | orchestrator |
2026-04-06 06:05:58.283324 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-04-06 06:05:58.283335 | orchestrator |
2026-04-06 06:05:58.283346 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-04-06 06:05:58.283357 | orchestrator | Monday 06 April 2026 06:05:54 +0000 (0:00:01.976) 0:13:14.866 **********
2026-04-06 06:05:58.283368 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:05:58.283378 | orchestrator |
2026-04-06 06:05:58.283389 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-04-06 06:05:58.283400 | orchestrator |
2026-04-06 06:05:58.283411 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-04-06 06:05:58.283422 | orchestrator | Monday 06 April 2026 06:05:56 +0000 (0:00:02.024) 0:13:16.890 **********
2026-04-06 06:05:58.283433 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:05:58.283443 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:05:58.283454 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:05:58.283465 | orchestrator |
2026-04-06 06:05:58.283476 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 06:05:58.283487 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 06:05:58.283501 | orchestrator | testbed-node-0 : ok=58  changed=25  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0
2026-04-06 06:05:58.283519 | orchestrator | testbed-node-1 : ok=31  changed=21  unreachable=0 failed=0 skipped=61  rescued=0 ignored=0
2026-04-06 06:05:58.283531 | orchestrator | testbed-node-2 : ok=31  changed=21  unreachable=0 failed=0 skipped=61  rescued=0 ignored=0
2026-04-06 06:05:58.283542 | orchestrator | testbed-node-3 : ok=49  changed=15  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-06 06:05:58.283553 | orchestrator | testbed-node-4 : ok=48  changed=14  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-06 06:05:58.283563 | orchestrator | testbed-node-5 : ok=43  changed=14  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-06 06:05:58.283574 | orchestrator |
2026-04-06 06:05:58.283585 | orchestrator |
2026-04-06 06:05:58.283596 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 06:05:58.283607 | orchestrator | Monday 06 April 2026 06:05:58 +0000 (0:00:02.145) 0:13:19.036 **********
2026-04-06 06:05:58.283618 | orchestrator | ===============================================================================
2026-04-06 06:05:58.283629 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 44.78s
2026-04-06 06:05:58.283640 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 36.70s
2026-04-06 06:05:58.283651 | orchestrator | nova-cell : Get new Libvirt version ------------------------------------ 35.87s
2026-04-06 06:05:58.283669 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 33.70s
2026-04-06 06:05:58.743546 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 27.15s
2026-04-06 06:05:58.743645 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 26.45s
2026-04-06 06:05:58.743684 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 21.99s
2026-04-06 06:05:58.743697 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 19.74s
2026-04-06 06:05:58.743708 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.60s
2026-04-06 06:05:58.743719 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.07s
2026-04-06 06:05:58.743730 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.04s
2026-04-06 06:05:58.743740 | orchestrator | nova : Restart nova-api container -------------------------------------- 13.68s
2026-04-06 06:05:58.743751 | orchestrator | nova-cell : Update cell ------------------------------------------------ 13.58s
2026-04-06 06:05:58.743763 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.24s
2026-04-06 06:05:58.743774 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 12.93s
2026-04-06 06:05:58.743785 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 12.57s
2026-04-06 06:05:58.743796 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.68s
2026-04-06 06:05:58.743806 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 11.18s
2026-04-06 06:05:58.743817 | orchestrator | nova : Restart nova-metadata container --------------------------------- 11.04s
2026-04-06 06:05:58.743828 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 10.10s
2026-04-06 06:05:58.944000 | orchestrator | + osism apply nova-update-cell-mappings
2026-04-06 06:06:10.377055 | orchestrator | 2026-04-06 06:06:10 | INFO  | Prepare task for execution of nova-update-cell-mappings.
2026-04-06 06:06:10.455476 | orchestrator | 2026-04-06 06:06:10 | INFO  | Task 09f5dec6-d77d-46a5-8934-86e44d1f804a (nova-update-cell-mappings) was prepared for execution.
2026-04-06 06:06:10.455554 | orchestrator | 2026-04-06 06:06:10 | INFO  | It takes a moment until task 09f5dec6-d77d-46a5-8934-86e44d1f804a (nova-update-cell-mappings) has been started and output is visible here.
2026-04-06 06:06:41.202740 | orchestrator |
2026-04-06 06:06:41.202837 | orchestrator | PLAY [Update Nova cell mappings] ***********************************************
2026-04-06 06:06:41.202848 | orchestrator |
2026-04-06 06:06:41.202857 | orchestrator | TASK [Get list of Nova cells] **************************************************
2026-04-06 06:06:41.202864 | orchestrator | Monday 06 April 2026 06:06:15 +0000 (0:00:01.627) 0:00:01.627 **********
2026-04-06 06:06:41.202872 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:06:41.202879 | orchestrator |
2026-04-06 06:06:41.202886 | orchestrator | TASK [Parse cell information] **************************************************
2026-04-06 06:06:41.202893 | orchestrator | Monday 06 April 2026 06:06:29 +0000 (0:00:13.727) 0:00:15.355 **********
2026-04-06 06:06:41.202900 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:06:41.202907 | orchestrator |
2026-04-06 06:06:41.202914 | orchestrator | TASK [Display cells to update] *************************************************
2026-04-06 06:06:41.202920 | orchestrator | Monday 06 April 2026 06:06:30 +0000 (0:00:01.130) 0:00:16.485 **********
2026-04-06 06:06:41.202927 | orchestrator | ok: [testbed-node-0] => {
2026-04-06 06:06:41.202935 | orchestrator |  "msg": "Cells to update: [{'name': '', 'uuid': 'd86093f4-83b6-4bb1-b30c-a603ccea57e1'}]"
2026-04-06 06:06:41.202942 | orchestrator | }
2026-04-06 06:06:41.202962 | orchestrator |
2026-04-06 06:06:41.202969 | orchestrator | TASK [Use specified cell UUID if provided] *************************************
2026-04-06 06:06:41.202976 | orchestrator | Monday 06 April 2026 06:06:31 +0000 (0:00:01.172) 0:00:17.658 **********
2026-04-06 06:06:41.202991 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:06:41.202998 | orchestrator |
2026-04-06 06:06:41.203055 | orchestrator | TASK [Abort if multiple cells found without specific UUID and abort_on_multiple is enabled] ***
2026-04-06 06:06:41.203064 | orchestrator | Monday 06 April 2026 06:06:32 +0000 (0:00:01.120) 0:00:18.778 **********
2026-04-06 06:06:41.203070 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:06:41.203095 | orchestrator |
2026-04-06 06:06:41.203102 | orchestrator | TASK [Update Nova cell mappings] ***********************************************
2026-04-06 06:06:41.203109 | orchestrator | Monday 06 April 2026 06:06:33 +0000 (0:00:01.139) 0:00:19.918 **********
2026-04-06 06:06:41.203116 | orchestrator | changed: [testbed-node-0] => (item=d86093f4-83b6-4bb1-b30c-a603ccea57e1)
2026-04-06 06:06:41.203122 | orchestrator |
2026-04-06 06:06:41.203129 | orchestrator | TASK [Display update results] **************************************************
2026-04-06 06:06:41.203136 | orchestrator | Monday 06 April 2026 06:06:39 +0000 (0:00:05.555) 0:00:25.474 **********
2026-04-06 06:06:41.203142 | orchestrator | ok: [testbed-node-0] => (item=d86093f4-83b6-4bb1-b30c-a603ccea57e1) => {
2026-04-06 06:06:41.203149 | orchestrator |  "msg": "Cell d86093f4-83b6-4bb1-b30c-a603ccea57e1 updated successfully"
2026-04-06 06:06:41.203156 | orchestrator | }
2026-04-06 06:06:41.203163 | orchestrator |
2026-04-06 06:06:41.203169 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 06:06:41.203177 | orchestrator | testbed-node-0 : ok=5  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 06:06:41.203185 | orchestrator |
2026-04-06 06:06:41.203192 | orchestrator |
2026-04-06 06:06:41.203199 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 06:06:41.203205 | orchestrator | Monday 06 April 2026 06:06:40 +0000 (0:00:01.535) 0:00:27.010 **********
2026-04-06 06:06:41.203212 | orchestrator | ===============================================================================
2026-04-06 06:06:41.203219 | orchestrator | Get list of Nova cells ------------------------------------------------- 13.73s
2026-04-06 06:06:41.203226 | orchestrator | Update Nova cell mappings ----------------------------------------------- 5.56s
2026-04-06 06:06:41.203232 | orchestrator | Display update results -------------------------------------------------- 1.54s
2026-04-06 06:06:41.203239 | orchestrator | Display cells to update ------------------------------------------------- 1.17s
2026-04-06 06:06:41.203246 | orchestrator | Abort if multiple cells found without specific UUID and abort_on_multiple is enabled --- 1.14s
2026-04-06 06:06:41.203253 | orchestrator | Parse cell information -------------------------------------------------- 1.13s
2026-04-06 06:06:41.203259 | orchestrator | Use specified cell UUID if provided ------------------------------------- 1.12s
2026-04-06 06:06:41.394517 | orchestrator | + osism apply -a upgrade nova
2026-04-06 06:06:42.692765 | orchestrator | 2026-04-06 06:06:42 | INFO  | Prepare task for execution of nova.
2026-04-06 06:06:42.758722 | orchestrator | 2026-04-06 06:06:42 | INFO  | Task 14be19e1-c6c6-44cb-92be-b26feee4600c (nova) was prepared for execution.
2026-04-06 06:06:42.758822 | orchestrator | 2026-04-06 06:06:42 | INFO  | It takes a moment until task 14be19e1-c6c6-44cb-92be-b26feee4600c (nova) has been started and output is visible here.
2026-04-06 06:07:56.610457 | orchestrator |
2026-04-06 06:07:56.610559 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 06:07:56.610571 | orchestrator |
2026-04-06 06:07:56.610578 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-04-06 06:07:56.610586 | orchestrator | Monday 06 April 2026 06:06:48 +0000 (0:00:01.741) 0:00:01.741 **********
2026-04-06 06:07:56.610594 | orchestrator | changed: [testbed-manager]
2026-04-06 06:07:56.610602 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:07:56.610609 | orchestrator | changed: [testbed-node-1]
2026-04-06 06:07:56.610617 | orchestrator | changed: [testbed-node-2]
2026-04-06 06:07:56.610624 | orchestrator | changed: [testbed-node-3]
2026-04-06 06:07:56.610631 | orchestrator | changed: [testbed-node-4]
2026-04-06 06:07:56.610637 | orchestrator | changed: [testbed-node-5]
2026-04-06 06:07:56.610643 | orchestrator |
2026-04-06 06:07:56.610650 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 06:07:56.610656 | orchestrator | Monday 06 April 2026 06:06:51 +0000 (0:00:03.589) 0:00:05.330 **********
2026-04-06 06:07:56.610683 | orchestrator | changed: [testbed-manager]
2026-04-06 06:07:56.610691 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:07:56.610697 | orchestrator | changed: [testbed-node-1]
2026-04-06 06:07:56.610704 | orchestrator | changed: [testbed-node-2]
2026-04-06 06:07:56.610710 | orchestrator | changed: [testbed-node-3]
2026-04-06 06:07:56.610718 | orchestrator | changed: [testbed-node-4]
2026-04-06 06:07:56.610724 | orchestrator | changed: [testbed-node-5]
2026-04-06 06:07:56.610730 | orchestrator |
2026-04-06 06:07:56.610736 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 06:07:56.610743 | orchestrator | Monday 06 April 2026 06:06:53 +0000 (0:00:02.102) 0:00:07.433 **********
2026-04-06 06:07:56.610750 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-04-06 06:07:56.610758 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-04-06 06:07:56.610765 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-04-06 06:07:56.610772 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-04-06 06:07:56.610779 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-04-06 06:07:56.610785 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-04-06 06:07:56.610791 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-04-06 06:07:56.610797 | orchestrator |
2026-04-06 06:07:56.610804 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-04-06 06:07:56.610810 | orchestrator |
2026-04-06 06:07:56.610816 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-06 06:07:56.610822 | orchestrator | Monday 06 April 2026 06:06:56 +0000 (0:00:02.995) 0:00:10.429 **********
2026-04-06 06:07:56.610829 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:07:56.610848 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:07:56.610855 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:07:56.610862 | orchestrator |
2026-04-06 06:07:56.610868 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-06 06:07:56.610875 | orchestrator | Monday 06 April 2026 06:06:59 +0000 (0:00:02.487) 0:00:12.917 **********
2026-04-06 06:07:56.610882 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 06:07:56.610889 | orchestrator |
2026-04-06 06:07:56.610896 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-06 06:07:56.610903 | orchestrator | Monday 06 April 2026 06:07:02 +0000 (0:00:02.896) 0:00:15.813 **********
2026-04-06 06:07:56.610910 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:07:56.610917 | orchestrator |
2026-04-06 06:07:56.610924 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-04-06 06:07:56.610930 | orchestrator | Monday 06 April 2026 06:07:04 +0000 (0:00:01.999) 0:00:17.813 **********
2026-04-06 06:07:56.610936 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:07:56.610942 | orchestrator |
2026-04-06 06:07:56.610949 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-04-06 06:07:56.610956 | orchestrator | Monday 06 April 2026 06:07:06 +0000 (0:00:02.148) 0:00:19.962 **********
2026-04-06 06:07:56.610964 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:07:56.610971 | orchestrator |
2026-04-06 06:07:56.610978 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-06 06:07:56.610985 | orchestrator | Monday 06 April 2026 06:07:10 +0000 (0:00:03.993) 0:00:23.956 **********
2026-04-06 06:07:56.610992 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:07:56.610998 | orchestrator |
2026-04-06 06:07:56.611005 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-04-06 06:07:56.611011 | orchestrator |
2026-04-06 06:07:56.611020 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-06 06:07:56.611028 | orchestrator | Monday 06 April 2026 06:07:29 +0000 (0:00:19.364) 0:00:43.321 **********
2026-04-06 06:07:56.611035 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:07:56.611042 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:07:56.611048 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:07:56.611061 | orchestrator |
2026-04-06 06:07:56.611088 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-06 06:07:56.611097 | orchestrator | Monday 06 April 2026 06:07:30 +0000 (0:00:01.320) 0:00:44.641 **********
2026-04-06 06:07:56.611104 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 06:07:56.611112 | orchestrator |
2026-04-06 06:07:56.611120 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-06 06:07:56.611128 | orchestrator | Monday 06 April 2026 06:07:32 +0000 (0:00:01.688) 0:00:46.330 **********
2026-04-06 06:07:56.611136 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:07:56.611143 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:07:56.611151 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:07:56.611159 | orchestrator |
2026-04-06 06:07:56.611168 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-04-06 06:07:56.611176 | orchestrator | Monday 06 April 2026 06:07:34 +0000 (0:00:01.648) 0:00:47.978 **********
2026-04-06 06:07:56.611183 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:07:56.611191 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:07:56.611199 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:07:56.611206 | orchestrator |
2026-04-06 06:07:56.611228 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-04-06 06:07:56.611236 | orchestrator | Monday 06 April 2026 06:07:36 +0000 (0:00:01.963) 0:00:49.942 **********
2026-04-06 06:07:56.611243 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:07:56.611250 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:07:56.611258 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:07:56.611266 | orchestrator |
2026-04-06 06:07:56.611273 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-04-06 06:07:56.611281 | orchestrator | Monday 06 April 2026 06:07:39 +0000 (0:00:03.505) 0:00:53.447 **********
2026-04-06 06:07:56.611289 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:07:56.611296 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:07:56.611304 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:07:56.611311 | orchestrator |
2026-04-06 06:07:56.611319 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-04-06 06:07:56.611328 | orchestrator |
2026-04-06 06:07:56.611335 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-06 06:07:56.611343 | orchestrator | Monday 06 April 2026 06:07:53 +0000 (0:00:13.633) 0:01:07.081 **********
2026-04-06 06:07:56.611351 | orchestrator | included: /ansible/roles/nova/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 06:07:56.611359 | orchestrator |
2026-04-06 06:07:56.611367 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-06 06:07:56.611375 | orchestrator | Monday 06 April 2026 06:07:55 +0000 (0:00:01.950) 0:01:09.031 **********
2026-04-06 06:07:56.611392 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:07:56.611402 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:07:56.611422 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:08:08.055039 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:08:08.055351 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 06:08:08.055381 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:08:08.055418 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:08:08.055451 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 06:08:08.055465 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 06:08:08.055477 | orchestrator |
2026-04-06 06:08:08.055490 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-04-06 06:08:08.055503 | orchestrator | Monday 06 April 2026 06:07:58 +0000 (0:00:03.105) 0:01:12.136 **********
2026-04-06 06:08:08.055514 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:08:08.055527 | orchestrator |
2026-04-06 06:08:08.055538 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-04-06 06:08:08.055553 | orchestrator | Monday 06 April 2026 06:07:59 +0000 (0:00:01.145) 0:01:13.282 **********
2026-04-06 06:08:08.055571 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:08:08.055590 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:08:08.055607 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:08:08.055620 | orchestrator |
2026-04-06 06:08:08.055633 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-04-06 06:08:08.055657 | orchestrator | Monday 06 April 2026 06:08:01 +0000 (0:00:01.585) 0:01:14.868 **********
2026-04-06 06:08:08.055671 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 06:08:08.055685 | orchestrator |
2026-04-06 06:08:08.055704 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-04-06 06:08:08.055718 | orchestrator | Monday 06 April 2026 06:08:03 +0000 (0:00:02.132) 0:01:17.001 **********
2026-04-06 06:08:08.055731 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:08:08.055745 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:08:08.055759 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:08:08.055773 | orchestrator |
2026-04-06 06:08:08.055786 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-06 06:08:08.055800 | orchestrator | Monday 06 April 2026 06:08:04 +0000 (0:00:01.395) 0:01:18.396 **********
2026-04-06 06:08:08.055860 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 06:08:08.055875 | orchestrator |
2026-04-06 06:08:08.055889 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-04-06 06:08:08.055903 | orchestrator | Monday 06 April 2026 06:08:06 +0000 (0:00:01.885) 0:01:20.282 **********
2026-04-06 06:08:08.055919 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:08:08.055946 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:08:11.367621 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:08:11.367805 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:08:11.367838 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:08:11.367884 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:08:11.367899 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:08:11.367921 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}}) 2026-04-06 06:08:11.367933 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:08:11.367945 | orchestrator | 2026-04-06 06:08:11.367958 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-06 06:08:11.367970 | orchestrator | Monday 06 April 2026 06:08:10 +0000 (0:00:04.251) 0:01:24.533 ********** 2026-04-06 06:08:11.367984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:08:11.368045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:08:13.356230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:08:13.356332 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:08:13.356364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:08:13.356378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:08:13.356389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:08:13.356398 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:08:13.356425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 
06:08:13.356459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:08:13.356470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:08:13.356479 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:08:13.356488 | orchestrator | 2026-04-06 06:08:13.356498 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-06 06:08:13.356508 | 
orchestrator | Monday 06 April 2026 06:08:12 +0000 (0:00:01.859) 0:01:26.393 ********** 2026-04-06 06:08:13.356518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:08:13.356535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:08:16.646767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:08:16.646870 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:08:16.646906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:08:16.646923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:08:16.646936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:08:16.646969 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:08:16.647001 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:08:16.647021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:08:16.647034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:08:16.647045 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:08:16.647057 | orchestrator | 2026-04-06 06:08:16.647069 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-06 06:08:16.647081 | orchestrator | Monday 06 April 2026 06:08:15 +0000 (0:00:02.497) 0:01:28.890 ********** 2026-04-06 06:08:16.647125 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:08:16.647153 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:08:22.830389 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:08:22.830500 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:08:22.830520 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:08:22.830573 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:08:22.830595 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:08:22.830609 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:08:22.830621 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:08:22.830633 | orchestrator | 2026-04-06 06:08:22.830646 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-06 06:08:22.830658 | orchestrator | Monday 06 April 2026 06:08:19 
+0000 (0:00:04.299) 0:01:33.189 ********** 2026-04-06 06:08:22.830671 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:08:22.830701 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:08:29.835676 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:08:29.835811 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:08:29.835868 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:08:29.835902 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:08:29.835923 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:08:29.835937 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:08:29.835949 | orchestrator 
| ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:08:29.835969 | orchestrator | 2026-04-06 06:08:29.835983 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-06 06:08:29.835995 | orchestrator | Monday 06 April 2026 06:08:29 +0000 (0:00:09.685) 0:01:42.875 ********** 2026-04-06 06:08:29.836008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}}}})  2026-04-06 06:08:29.836033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:08:41.663333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:08:41.663429 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:08:41.663442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:08:41.663490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:08:41.663500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:08:41.663507 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:08:41.663542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:08:41.663551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:08:41.663564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:08:41.663571 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:08:41.663578 | orchestrator | 2026-04-06 06:08:41.663586 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-04-06 06:08:41.663594 | orchestrator | Monday 06 April 2026 06:08:31 +0000 (0:00:02.135) 0:01:45.010 
********** 2026-04-06 06:08:41.663601 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:08:41.663608 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:08:41.663614 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:08:41.663621 | orchestrator | 2026-04-06 06:08:41.663628 | orchestrator | TASK [nova : Copying over nova-metadata-wsgi.conf] ***************************** 2026-04-06 06:08:41.663635 | orchestrator | Monday 06 April 2026 06:08:33 +0000 (0:00:02.048) 0:01:47.059 ********** 2026-04-06 06:08:41.663641 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:08:41.663648 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:08:41.663655 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:08:41.663661 | orchestrator | 2026-04-06 06:08:41.663668 | orchestrator | TASK [nova : Copying over vendordata file for nova services] ******************* 2026-04-06 06:08:41.663675 | orchestrator | Monday 06 April 2026 06:08:35 +0000 (0:00:01.707) 0:01:48.767 ********** 2026-04-06 06:08:41.663682 | orchestrator | skipping: [testbed-node-0] => (item=nova-metadata)  2026-04-06 06:08:41.663690 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-06 06:08:41.663696 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:08:41.663703 | orchestrator | skipping: [testbed-node-1] => (item=nova-metadata)  2026-04-06 06:08:41.663710 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-06 06:08:41.663716 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:08:41.663723 | orchestrator | skipping: [testbed-node-2] => (item=nova-metadata)  2026-04-06 06:08:41.663730 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-04-06 06:08:41.663737 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:08:41.663743 | orchestrator | 2026-04-06 06:08:41.663750 | orchestrator | TASK [Configure uWSGI for Nova] ************************************************ 2026-04-06 06:08:41.663757 | orchestrator | 
Monday 06 April 2026 06:08:36 +0000 (0:00:01.382) 0:01:50.149 ********** 2026-04-06 06:08:41.663764 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-api', 'port': '8774', 'workers': '2'}) 2026-04-06 06:08:41.663773 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-metadata', 'port': '8775', 'workers': '2'}) 2026-04-06 06:08:41.663780 | orchestrator | 2026-04-06 06:08:41.663787 | orchestrator | TASK [service-uwsgi-config : Copying over nova-api uWSGI config] *************** 2026-04-06 06:08:41.663794 | orchestrator | Monday 06 April 2026 06:08:39 +0000 (0:00:03.042) 0:01:53.192 ********** 2026-04-06 06:08:41.663800 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:08:41.663807 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:08:41.663818 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:08:41.663825 | orchestrator | 2026-04-06 06:09:07.313296 | orchestrator | TASK [service-uwsgi-config : Copying over nova-metadata uWSGI config] ********** 2026-04-06 06:09:07.313466 | orchestrator | Monday 06 April 2026 06:08:42 +0000 (0:00:02.969) 0:01:56.162 ********** 2026-04-06 06:09:07.313494 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:09:07.313516 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:09:07.313539 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:09:07.313563 | orchestrator | 2026-04-06 06:09:07.313582 | orchestrator | TASK [nova : Run Nova upgrade checks] ****************************************** 2026-04-06 06:09:07.313599 | orchestrator | Monday 06 April 2026 06:08:46 +0000 (0:00:03.644) 0:01:59.806 ********** 2026-04-06 06:09:07.313616 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:09:07.313641 | orchestrator | 2026-04-06 06:09:07.313664 | orchestrator | TASK [nova : Upgrade status check result] ************************************** 2026-04-06 06:09:07.313681 | orchestrator | Monday 06 
April 2026 06:09:04 +0000 (0:00:18.811) 0:02:18.617 ********** 2026-04-06 06:09:07.313699 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:09:07.313716 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:09:07.313733 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:09:07.313750 | orchestrator | 2026-04-06 06:09:07.313768 | orchestrator | TASK [nova : Stopping top level nova services] ********************************* 2026-04-06 06:09:07.313786 | orchestrator | Monday 06 April 2026 06:09:06 +0000 (0:00:01.437) 0:02:20.055 ********** 2026-04-06 06:09:07.313808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:09:07.313835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:09:07.313854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:09:07.313890 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:09:07.313953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:09:07.313978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:09:07.314109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:09:07.314169 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:09:07.314189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:09:07.314252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:09:12.539759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 06:09:12.539895 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:09:12.539920 | orchestrator |
2026-04-06 06:09:12.539938 | orchestrator | TASK [service-check-containers : nova | Check containers] **********************
2026-04-06 06:09:12.539956 | orchestrator | Monday 06 April 2026 06:09:08 +0000 (0:00:02.411) 0:02:22.466 **********
2026-04-06 06:09:12.539975 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:09:12.539995 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:09:12.540059 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:09:12.540102 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:09:12.540121 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:09:12.540178 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:09:12.540219 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 06:09:12.540252 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 06:09:16.038659 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 06:09:16.038760 | orchestrator |
2026-04-06 06:09:16.038777 | orchestrator | TASK [service-check-containers : nova | Notify handlers to restart containers] ***
2026-04-06 06:09:16.038790 | orchestrator | Monday 06 April 2026 06:09:13 +0000 (0:00:04.882) 0:02:27.349 **********
2026-04-06 06:09:16.038803 | orchestrator | ok: [testbed-node-0] => {
2026-04-06 06:09:16.038815 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 06:09:16.038826 | orchestrator | }
2026-04-06 06:09:16.038837 | orchestrator | ok: [testbed-node-1] => {
2026-04-06 06:09:16.038848 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 06:09:16.038859 | orchestrator | }
2026-04-06 06:09:16.038869 | orchestrator | ok: [testbed-node-2] => {
2026-04-06 06:09:16.038880 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 06:09:16.038891 | orchestrator | }
2026-04-06 06:09:16.038902 | orchestrator |
2026-04-06 06:09:16.038913 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-06 06:09:16.038924 | orchestrator | Monday 06 April 2026 06:09:15 +0000 (0:00:01.428) 0:02:28.777 **********
2026-04-06 06:09:16.038938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:09:16.038980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:09:16.039009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 06:09:16.039022 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:09:16.039054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:09:16.039067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:09:16.039088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 06:09:16.039099 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:09:16.039116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi':
'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:09:16.039163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:09:59.595526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-06 06:09:59.595637 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:09:59.595649 | orchestrator |
2026-04-06 06:09:59.595658 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-04-06 06:09:59.595692 | orchestrator | Monday 06 April 2026 06:09:17 +0000 (0:00:02.191) 0:02:30.969 **********
2026-04-06 06:09:59.595700 | orchestrator |
2026-04-06 06:09:59.595707 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-04-06 06:09:59.595715 | orchestrator | Monday 06 April 2026 06:09:17 +0000 (0:00:00.530) 0:02:31.499 **********
2026-04-06 06:09:59.595728 | orchestrator |
2026-04-06 06:09:59.595735 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-04-06 06:09:59.595743 | orchestrator | Monday 06 April 2026 06:09:18 +0000 (0:00:00.522) 0:02:32.022 **********
2026-04-06 06:09:59.595750 | orchestrator |
2026-04-06 06:09:59.595758 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-04-06 06:09:59.595765 | orchestrator |
2026-04-06 06:09:59.595772 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-06 06:09:59.595779 | orchestrator | Monday 06 April 2026 06:09:20 +0000 (0:00:01.673) 0:02:33.695 **********
2026-04-06 06:09:59.595787 | orchestrator | included: /ansible/roles/nova-cell/tasks/upgrade.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 06:09:59.595795 | orchestrator |
2026-04-06 06:09:59.595802 | orchestrator | TASK [nova-cell : Get new Libvirt version] *************************************
2026-04-06 06:09:59.595809 | orchestrator | Monday 06 April 2026 06:09:22 +0000 (0:00:02.632) 0:02:36.328 **********
2026-04-06 06:09:59.595816 | orchestrator | changed: [testbed-node-3]
2026-04-06 06:09:59.595823 | orchestrator |
2026-04-06 06:09:59.595830 | orchestrator | TASK [nova-cell : Cache new Libvirt version] ***********************************
2026-04-06 06:09:59.595837 | orchestrator | Monday 06 April 2026 06:09:27 +0000 (0:00:04.447) 0:02:40.775 **********
2026-04-06 06:09:59.595844 | orchestrator | ok: [testbed-node-3]
2026-04-06 06:09:59.595852 | orchestrator |
2026-04-06 06:09:59.595859 | orchestrator | TASK [Get nova_libvirt image info] *********************************************
2026-04-06 06:09:59.595865 | orchestrator | Monday 06 April 2026 06:09:29 +0000 (0:00:02.300) 0:02:43.076 **********
2026-04-06 06:09:59.595872 | orchestrator | included: service-image-info for testbed-node-3
2026-04-06 06:09:59.595880 | orchestrator |
2026-04-06 06:09:59.595886 | orchestrator | TASK [service-image-info : community.docker.docker_image_info] *****************
2026-04-06 06:09:59.595893 | orchestrator | Monday 06 April 2026 06:09:31 +0000 (0:00:02.084) 0:02:45.160 **********
2026-04-06 06:09:59.595900 | orchestrator | ok: [testbed-node-3]
2026-04-06 06:09:59.595907 | orchestrator |
2026-04-06 06:09:59.595915 | orchestrator | TASK [service-image-info : set_fact] *******************************************
2026-04-06 06:09:59.595922 | orchestrator | Monday 06 April 2026 06:09:35 +0000 (0:00:03.060) 0:02:49.593 **********
2026-04-06 06:09:59.595929 | orchestrator | ok: [testbed-node-3]
2026-04-06 06:09:59.595936 | orchestrator |
2026-04-06 06:09:59.595954 | orchestrator | TASK [service-image-info : containers.podman.podman_image_info] ****************
2026-04-06 06:09:59.595961 | orchestrator | Monday 06 April 2026 06:09:38 +0000 (0:00:03.060) 0:02:52.653 **********
2026-04-06 06:09:59.595968 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:09:59.595975 | orchestrator |
2026-04-06 06:09:59.595982 | orchestrator | TASK [service-image-info : set_fact] *******************************************
2026-04-06 06:09:59.595989 | orchestrator | Monday 06 April 2026 06:09:41 +0000 (0:00:03.016) 0:02:55.670 **********
2026-04-06 06:09:59.595995 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:09:59.596003 | orchestrator |
2026-04-06 06:09:59.596009 | orchestrator | TASK [nova-cell : Get container facts] *****************************************
2026-04-06 06:09:59.596016 | orchestrator | Monday 06 April 2026 06:09:45 +0000 (0:00:03.054) 0:02:58.724 **********
2026-04-06 06:09:59.596024 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:09:59.596030 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:09:59.596037 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:09:59.596045 | orchestrator | ok: [testbed-node-3]
2026-04-06 06:09:59.596052 | orchestrator | ok: [testbed-node-5]
2026-04-06 06:09:59.596058 | orchestrator | ok: [testbed-node-4]
2026-04-06 06:09:59.596070 | orchestrator |
2026-04-06 06:09:59.596077 | orchestrator | TASK [nova-cell : Get current Libvirt version] *********************************
2026-04-06 06:09:59.596088 | orchestrator | Monday 06 April 2026 06:09:50 +0000 (0:00:05.377) 0:03:04.101 **********
2026-04-06 06:09:59.596098 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:09:59.596109 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:09:59.596119 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:09:59.596129 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:09:59.596140 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:09:59.596150 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:09:59.596174 | orchestrator |
2026-04-06 06:09:59.596184 | orchestrator | TASK [nova-cell : Check that the new Libvirt version is >= current] ************
2026-04-06 06:09:59.596191 | orchestrator | Monday 06 April 2026 06:09:55 +0000 (0:00:04.954) 0:03:09.056 **********
2026-04-06 06:09:59.596197 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:09:59.596203 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:09:59.596210 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:09:59.596219 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:09:59.596229 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:09:59.596254 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:09:59.596263 | orchestrator |
2026-04-06 06:09:59.596274 | orchestrator | TASK [nova-cell : Stopping nova cell services] *********************************
2026-04-06 06:09:59.596285 | orchestrator | Monday 06 April 2026 06:09:58 +0000 (0:00:03.064) 0:03:12.121 **********
2026-04-06 06:09:59.596297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-06 06:09:59.596309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-06 06:09:59.596321 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-06 06:09:59.596331 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:09:59.596346 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval':
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:09:59.596364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:09:59.596380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 06:10:10.212366 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:10:10.212481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:10:10.212501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:10:10.212537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 06:10:10.212570 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:10:10.212582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:10:10.212595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:10:10.212624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:10:10.212637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:10:10.212648 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:10:10.212659 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:10:10.212671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:10:10.212694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-06 06:10:10.212706 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:10:10.212718 | orchestrator |
2026-04-06 06:10:10.212729 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-04-06 06:10:10.212741 | orchestrator | Monday 06 April 2026 06:10:01 +0000 (0:00:03.403) 0:03:15.524 **********
2026-04-06 06:10:10.212752 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:10:10.212763 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:10:10.212774 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:10:10.212785 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 06:10:10.212796 | orchestrator |
2026-04-06 06:10:10.212807 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-06 06:10:10.212818 | orchestrator | Monday 06 April 2026 06:10:04 +0000 (0:00:02.339) 0:03:17.864 **********
2026-04-06 06:10:10.212829 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-04-06 06:10:10.212841 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-04-06 06:10:10.212851 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-04-06 06:10:10.212862 | orchestrator |
2026-04-06 06:10:10.212873 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-06 06:10:10.212884 | orchestrator | Monday 06 April 2026 06:10:06 +0000 (0:00:01.992) 0:03:19.856 **********
2026-04-06 06:10:10.212895 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-04-06 06:10:10.212908 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-04-06 06:10:10.212921 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-04-06 06:10:10.212934 | orchestrator |
2026-04-06 06:10:10.212947 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-06 06:10:10.212960 | orchestrator | Monday 06 April 2026 06:10:08 +0000 (0:00:02.148) 0:03:22.005 **********
2026-04-06 06:10:10.212973 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-04-06 06:10:10.212987 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:10:10.213000 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-04-06 06:10:10.213013 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:10:10.213025 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-04-06 06:10:10.213038 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:10:10.213051 | orchestrator |
2026-04-06 06:10:10.213064 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-04-06 06:10:10.213077 | orchestrator | Monday 06 April 2026 06:10:09 +0000 (0:00:01.432) 0:03:23.438 **********
2026-04-06 06:10:10.213090 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 06:10:10.213104 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 06:10:10.213117 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:10:10.213137 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 06:10:18.554750 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 06:10:18.554856 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 06:10:18.554872 | orchestrator | ok: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 06:10:18.554904 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:10:18.554915 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 06:10:18.554924 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-06 06:10:18.554932 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 06:10:18.554941 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:10:18.554949 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 06:10:18.554958 | orchestrator | ok: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 06:10:18.554967 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-06 06:10:18.554976 | orchestrator |
2026-04-06 06:10:18.554985 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-04-06 06:10:18.554993 | orchestrator | Monday 06 April 2026 06:10:11 +0000 (0:00:02.001) 0:03:25.440 **********
2026-04-06 06:10:18.555001 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:10:18.555009 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:10:18.555017 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:10:18.555025 | orchestrator | ok: [testbed-node-3]
2026-04-06 06:10:18.555035 | orchestrator | ok: [testbed-node-4]
2026-04-06 06:10:18.555044 | orchestrator | ok: [testbed-node-5]
2026-04-06 06:10:18.555052 | orchestrator |
2026-04-06 06:10:18.555061 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-04-06 06:10:18.555069 | orchestrator | Monday 06 April 2026 06:10:14 +0000 (0:00:02.339) 0:03:27.780 **********
2026-04-06 06:10:18.555078 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:10:18.555087 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:10:18.555095 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:10:18.555105 | orchestrator | ok:
[testbed-node-3]
2026-04-06 06:10:18.555114 | orchestrator | ok: [testbed-node-4]
2026-04-06 06:10:18.555122 | orchestrator | ok: [testbed-node-5]
2026-04-06 06:10:18.555130 | orchestrator |
2026-04-06 06:10:18.555139 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-06 06:10:18.555148 | orchestrator | Monday 06 April 2026 06:10:16 +0000 (0:00:02.488) 0:03:30.268 **********
2026-04-06 06:10:18.555223 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-06 06:10:18.555241 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-06 06:10:18.555251 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-06 06:10:18.555291 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-06 06:10:18.555304 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-06 06:10:18.555321 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-06 06:10:18.555331 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-06 06:10:18.555342 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-06 06:10:18.555366 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-06 06:10:24.981949 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-06 06:10:24.982123 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-06 06:10:24.982166 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-06 06:10:24.982231 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-06 06:10:24.982244 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-06 06:10:24.982278 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-06 06:10:24.982290 | orchestrator |
2026-04-06 06:10:24.982324 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-06 06:10:24.982339 | orchestrator | Monday 06 April 2026 06:10:20 +0000 (0:00:03.784) 0:03:34.052 **********
2026-04-06 06:10:24.982352 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0,
testbed-node-1, testbed-node-2
2026-04-06 06:10:24.982365 | orchestrator |
2026-04-06 06:10:24.982377 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-04-06 06:10:24.982388 | orchestrator | Monday 06 April 2026 06:10:22 +0000 (0:00:02.274) 0:03:36.327 **********
2026-04-06 06:10:24.982401 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-06 06:10:24.982424 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-06 06:10:24.982437 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-06 06:10:24.982459 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-06 06:10:24.982482 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-06 06:10:28.570246 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-06 06:10:28.570327 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-06 06:10:28.570352 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-06 06:10:28.570360 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-06 06:10:28.570388 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-06 06:10:28.570396 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-06 06:10:28.570418 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-06 06:10:28.570427 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-06 06:10:28.570438 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-06 06:10:28.570445 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-06 06:10:28.570458 | orchestrator |
2026-04-06 06:10:28.570466 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-04-06 06:10:28.570474 | orchestrator | Monday 06 April 2026 06:10:27 +0000 (0:00:04.662) 0:03:40.989 **********
2026-04-06 06:10:28.570483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-06 06:10:28.570496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-06 06:10:29.446083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-06 06:10:29.446263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-06 06:10:29.446308 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-06 06:10:29.446321 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:10:29.446335 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-06 06:10:29.446400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-06 06:10:29.446432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-06 06:10:29.446444 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:10:29.446456 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-06 06:10:29.446467 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:10:29.446485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-06 06:10:29.446506 | orchestrator | skipping: [testbed-node-0] => (item={'key':
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:10:29.446518 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:10:29.446529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:10:29.446542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  
2026-04-06 06:10:29.446564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:10:32.499779 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:10:32.499888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:10:32.499907 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:10:32.499920 | orchestrator | 2026-04-06 06:10:32.499933 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-06 06:10:32.499968 | orchestrator | Monday 06 April 2026 06:10:30 +0000 (0:00:03.347) 0:03:44.336 ********** 2026-04-06 06:10:32.499996 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:10:32.500010 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:10:32.500022 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:10:32.500034 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:10:32.500064 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:10:32.500091 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 06:10:32.500104 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:10:32.500115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 06:10:32.500127 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:10:32.500138 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:10:32.500150 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 06:10:32.500161 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:10:32.500181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:11:02.076296 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:11:02.076449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:11:02.076477 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:11:02.076500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 
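The container definitions being skipped above each carry a `healthcheck` dict (`interval`, `retries`, `start_period`, `timeout`, and a `['CMD-SHELL', ...]` test such as `healthcheck_port nova-conductor 5672`). As a hedged illustration of what such a dict amounts to — not kolla-ansible's actual mechanism, which applies these through its container module rather than the CLI — it maps roughly onto Docker's `--health-*` run flags:

```python
def healthcheck_to_docker_args(hc):
    """Translate a kolla-style healthcheck dict into equivalent `docker run`
    health flags. Illustrative sketch only: kolla-ansible sets these fields
    via its container API, not by shelling out like this."""
    # 'CMD-SHELL' means the second element is run through the shell.
    if hc["test"][0] == "CMD-SHELL":
        cmd = hc["test"][1]
    else:
        cmd = " ".join(hc["test"])
    # Kolla stores the durations as bare second counts; Docker wants units.
    return [
        "--health-cmd=" + cmd,
        "--health-interval=%ss" % hc["interval"],
        "--health-retries=%s" % hc["retries"],
        "--health-start-period=%ss" % hc["start_period"],
        "--health-timeout=%ss" % hc["timeout"],
    ]

# Healthcheck dict as it appears for nova-conductor in the log above.
example = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port nova-conductor 5672"],
    "timeout": "30",
}
```

`healthcheck_port` and `healthcheck_curl` are helper scripts shipped inside the kolla images; the flags above only show how the surrounding timing parameters would be expressed.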
 2026-04-06 06:11:02.076520 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:11:02.076541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:11:02.076563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:11:02.076584 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:11:02.076605 | orchestrator | 2026-04-06 06:11:02.076627 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-06 06:11:02.076650 | orchestrator | Monday 06 April 2026 06:10:34 +0000 (0:00:03.742) 0:03:48.079 ********** 2026-04-06 06:11:02.076670 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:11:02.076690 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:11:02.076710 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:11:02.076734 
| orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 06:11:02.076790 | orchestrator | 2026-04-06 06:11:02.076813 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-06 06:11:02.076834 | orchestrator | Monday 06 April 2026 06:10:36 +0000 (0:00:02.328) 0:03:50.408 ********** 2026-04-06 06:11:02.076854 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-06 06:11:02.076875 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-06 06:11:02.076897 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-06 06:11:02.076918 | orchestrator | 2026-04-06 06:11:02.076941 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-06 06:11:02.076989 | orchestrator | Monday 06 April 2026 06:10:38 +0000 (0:00:01.975) 0:03:52.383 ********** 2026-04-06 06:11:02.077010 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-06 06:11:02.077030 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-06 06:11:02.077050 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-06 06:11:02.077072 | orchestrator | 2026-04-06 06:11:02.077094 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-06 06:11:02.077113 | orchestrator | Monday 06 April 2026 06:10:40 +0000 (0:00:02.042) 0:03:54.426 ********** 2026-04-06 06:11:02.077133 | orchestrator | ok: [testbed-node-3] 2026-04-06 06:11:02.077152 | orchestrator | ok: [testbed-node-4] 2026-04-06 06:11:02.077169 | orchestrator | ok: [testbed-node-5] 2026-04-06 06:11:02.077181 | orchestrator | 2026-04-06 06:11:02.077192 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-04-06 06:11:02.077257 | orchestrator | Monday 06 April 2026 06:10:42 +0000 (0:00:01.749) 0:03:56.175 ********** 2026-04-06 06:11:02.077294 | orchestrator | ok: [testbed-node-3] 
2026-04-06 06:11:02.077310 | orchestrator | ok: [testbed-node-4] 2026-04-06 06:11:02.077321 | orchestrator | ok: [testbed-node-5] 2026-04-06 06:11:02.077332 | orchestrator | 2026-04-06 06:11:02.077343 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-04-06 06:11:02.077354 | orchestrator | Monday 06 April 2026 06:10:44 +0000 (0:00:01.656) 0:03:57.832 ********** 2026-04-06 06:11:02.077365 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-06 06:11:02.077376 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-06 06:11:02.077387 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-06 06:11:02.077397 | orchestrator | 2026-04-06 06:11:02.077409 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-06 06:11:02.077420 | orchestrator | Monday 06 April 2026 06:10:46 +0000 (0:00:02.290) 0:04:00.123 ********** 2026-04-06 06:11:02.077431 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-06 06:11:02.077442 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-06 06:11:02.077453 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-06 06:11:02.077463 | orchestrator | 2026-04-06 06:11:02.077474 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-06 06:11:02.077485 | orchestrator | Monday 06 April 2026 06:10:48 +0000 (0:00:02.234) 0:04:02.357 ********** 2026-04-06 06:11:02.077496 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-06 06:11:02.077506 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-06 06:11:02.077517 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-06 06:11:02.077528 | orchestrator | ok: [testbed-node-3] => (item=nova-libvirt) 2026-04-06 06:11:02.077538 | orchestrator | ok: [testbed-node-4] => (item=nova-libvirt) 2026-04-06 06:11:02.077549 | orchestrator | 
ok: [testbed-node-5] => (item=nova-libvirt) 2026-04-06 06:11:02.077560 | orchestrator | 2026-04-06 06:11:02.077570 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-06 06:11:02.077581 | orchestrator | Monday 06 April 2026 06:10:53 +0000 (0:00:04.958) 0:04:07.315 ********** 2026-04-06 06:11:02.077592 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:11:02.077603 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:11:02.077613 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:11:02.077624 | orchestrator | 2026-04-06 06:11:02.077648 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-06 06:11:02.077659 | orchestrator | Monday 06 April 2026 06:10:55 +0000 (0:00:01.373) 0:04:08.689 ********** 2026-04-06 06:11:02.077670 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:11:02.077681 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:11:02.077691 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:11:02.077702 | orchestrator | 2026-04-06 06:11:02.077713 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-04-06 06:11:02.077724 | orchestrator | Monday 06 April 2026 06:10:56 +0000 (0:00:01.339) 0:04:10.028 ********** 2026-04-06 06:11:02.077734 | orchestrator | ok: [testbed-node-3] 2026-04-06 06:11:02.077745 | orchestrator | ok: [testbed-node-4] 2026-04-06 06:11:02.077756 | orchestrator | ok: [testbed-node-5] 2026-04-06 06:11:02.077766 | orchestrator | 2026-04-06 06:11:02.077777 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-06 06:11:02.077788 | orchestrator | Monday 06 April 2026 06:10:58 +0000 (0:00:02.515) 0:04:12.544 ********** 2026-04-06 06:11:02.077800 | orchestrator | ok: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for 
Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-06 06:11:02.077812 | orchestrator | ok: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-06 06:11:02.077823 | orchestrator | ok: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-06 06:11:02.077834 | orchestrator | ok: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-06 06:11:02.077845 | orchestrator | ok: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-06 06:11:02.077856 | orchestrator | ok: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-06 06:11:02.077867 | orchestrator | 2026-04-06 06:11:02.077878 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-04-06 06:11:23.537889 | orchestrator | Monday 06 April 2026 06:11:03 +0000 (0:00:04.238) 0:04:16.782 ********** 2026-04-06 06:11:23.538005 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-06 06:11:23.538087 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-06 06:11:23.538100 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-06 06:11:23.538111 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-06 06:11:23.538122 | orchestrator | ok: [testbed-node-3] 2026-04-06 06:11:23.538134 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-06 06:11:23.538145 
| orchestrator | ok: [testbed-node-4] 2026-04-06 06:11:23.538156 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-06 06:11:23.538168 | orchestrator | ok: [testbed-node-5] 2026-04-06 06:11:23.538180 | orchestrator | 2026-04-06 06:11:23.538192 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-04-06 06:11:23.538246 | orchestrator | Monday 06 April 2026 06:11:07 +0000 (0:00:04.258) 0:04:21.041 ********** 2026-04-06 06:11:23.538259 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:11:23.538270 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:11:23.538281 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:11:23.538292 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 06:11:23.538303 | orchestrator | 2026-04-06 06:11:23.538315 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-04-06 06:11:23.538348 | orchestrator | Monday 06 April 2026 06:11:10 +0000 (0:00:03.379) 0:04:24.421 ********** 2026-04-06 06:11:23.538359 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-06 06:11:23.538370 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-06 06:11:23.538380 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-06 06:11:23.538391 | orchestrator | 2026-04-06 06:11:23.538402 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-04-06 06:11:23.538413 | orchestrator | Monday 06 April 2026 06:11:12 +0000 (0:00:02.041) 0:04:26.462 ********** 2026-04-06 06:11:23.538426 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:11:23.538439 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:11:23.538451 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:11:23.538464 | orchestrator | 2026-04-06 06:11:23.538476 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] 
********************** 2026-04-06 06:11:23.538489 | orchestrator | Monday 06 April 2026 06:11:14 +0000 (0:00:01.426) 0:04:27.889 ********** 2026-04-06 06:11:23.538502 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:11:23.538514 | orchestrator | 2026-04-06 06:11:23.538527 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-04-06 06:11:23.538540 | orchestrator | Monday 06 April 2026 06:11:15 +0000 (0:00:01.267) 0:04:29.156 ********** 2026-04-06 06:11:23.538552 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:11:23.538564 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:11:23.538576 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:11:23.538589 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:11:23.538601 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:11:23.538615 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:11:23.538628 | orchestrator | 2026-04-06 06:11:23.538641 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-04-06 06:11:23.538654 | orchestrator | Monday 06 April 2026 06:11:17 +0000 (0:00:01.855) 0:04:31.011 ********** 2026-04-06 06:11:23.538667 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-06 06:11:23.538680 | orchestrator | 2026-04-06 06:11:23.538692 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-04-06 06:11:23.538704 | orchestrator | Monday 06 April 2026 06:11:19 +0000 (0:00:01.754) 0:04:32.766 ********** 2026-04-06 06:11:23.538717 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:11:23.538729 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:11:23.538742 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:11:23.538755 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:11:23.538768 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:11:23.538780 | orchestrator | skipping: [testbed-node-2] 
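The "Pushing nova secret xml for libvirt" task earlier in this log registers two Ceph client secrets (`ceph-ephemeral-nova` and `ceph-persistent-cinder`, with the UUIDs shown in the item dicts) on each compute node. As a hedged sketch — this is not kolla-ansible's actual template, just the minimal shape of the `<secret>` definition that `virsh secret-define` consumes before `virsh secret-set-value` attaches the key:

```python
def render_libvirt_secret(uuid, name):
    """Render a minimal libvirt <secret> definition for a Ceph client key.
    Illustrative only; the real kolla-ansible template may differ."""
    return (
        "<secret ephemeral='no' private='no'>\n"
        "  <uuid>%s</uuid>\n"
        "  <usage type='ceph'>\n"
        "    <name>%s</name>\n"
        "  </usage>\n"
        "</secret>\n"
    ) % (uuid, name)

# UUID/name pairs taken from the task items in the log above.
secrets = [
    ("5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd", "ceph-ephemeral-nova"),
    ("63dd366f-e403-41f2-beff-dad9980a1637", "ceph-persistent-cinder"),
]
rendered = {name: render_libvirt_secret(uuid, name) for uuid, name in secrets}
```

The follow-up task "Pushing secrets key for libvirt" then supplies the actual keyring material (logged as `item=None` because the values are sensitive and censored).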
2026-04-06 06:11:23.538791 | orchestrator |
2026-04-06 06:11:23.538802 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-04-06 06:11:23.538812 | orchestrator | Monday 06 April 2026 06:11:21 +0000 (0:00:01.941) 0:04:34.708 **********
2026-04-06 06:11:23.538827 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-06 06:11:23.538863 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-06 06:11:23.538889 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-06 06:11:23.538903 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-06 06:11:23.538915 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-06 06:11:23.538927 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-06 06:11:23.538939 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-06 06:11:23.538964 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-06 06:11:27.416392 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-06 06:11:27.416497 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-06 06:11:27.416513 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-06 06:11:27.416526 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-06 06:11:27.416538 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-06 06:11:27.416573 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-06 06:11:27.416610 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-06 06:11:27.416625 | orchestrator |
2026-04-06 06:11:27.416638 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-04-06 06:11:27.416650 | orchestrator | Monday 06 April 2026 06:11:25 +0000 (0:00:04.723) 0:04:39.431 **********
2026-04-06 06:11:27.416663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-06 06:11:27.416675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-06 06:11:27.416688 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-06 06:11:27.416708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-06 06:11:27.416733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-06 06:11:41.103578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-06 06:11:41.103690 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-06 06:11:41.103706 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-06 06:11:41.103737 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-06 06:11:41.103749 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-06 06:11:41.103786 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-06 06:11:41.103798 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-06 06:11:41.103808 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-06 06:11:41.103818 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-06 06:11:41.103835 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-06 06:11:41.103846 | orchestrator |
2026-04-06 06:11:41.103857 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-04-06 06:11:41.103869 | orchestrator | Monday 06 April 2026 06:11:34 +0000 (0:00:09.220) 0:04:48.651 **********
2026-04-06 06:11:41.103879 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:11:41.103889 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:11:41.103899 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:11:41.103908 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:11:41.103918 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:11:41.103928 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:11:41.103937 | orchestrator |
2026-04-06 06:11:41.103947 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-04-06 06:11:41.103957 | orchestrator | Monday 06 April 2026 06:11:38 +0000 (0:00:03.471) 0:04:52.123 **********
2026-04-06 06:11:41.103967 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-06 06:11:41.103976 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-06 06:11:41.103986 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-06 06:11:41.103996 | orchestrator | ok: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-06 06:11:41.104006 | orchestrator | ok: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-06 06:11:41.104019 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-06 06:11:41.104029 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:11:41.104039 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-06 06:11:41.104049 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:11:41.104058 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-06 06:11:41.104068 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:11:41.104084 | orchestrator | ok: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-04-06 06:12:12.359389 | orchestrator | ok: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-06 06:12:12.359533 | orchestrator | ok: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-06 06:12:12.359552 | orchestrator | ok: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-04-06 06:12:12.359565 | orchestrator |
2026-04-06 06:12:12.359577 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-04-06 06:12:12.359589 | orchestrator | Monday 06 April 2026 06:11:43 +0000 (0:00:05.346) 0:04:57.470 **********
2026-04-06 06:12:12.359600 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:12:12.359613 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:12:12.359624 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:12:12.359643 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:12:12.359664 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:12:12.359682 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:12:12.359693 | orchestrator |
2026-04-06 06:12:12.359705 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-04-06 06:12:12.359741 | orchestrator | Monday 06 April 2026 06:11:45 +0000 (0:00:02.114) 0:04:59.584 **********
2026-04-06 06:12:12.359753 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-06 06:12:12.359765 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-06 06:12:12.359791 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-06 06:12:12.359803 | orchestrator | ok: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-06 06:12:12.359815 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-06 06:12:12.359825 | orchestrator | ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-06 06:12:12.359836 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-06 06:12:12.359847 | orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-04-06 06:12:12.359858 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-06 06:12:12.359871 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-06 06:12:12.359884 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:12:12.359897 | orchestrator | ok: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-06 06:12:12.359910 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-06 06:12:12.359929 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:12:12.359950 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-06 06:12:12.359968 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:12:12.359982 | orchestrator | ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-06 06:12:12.359994 | orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-04-06 06:12:12.360007 | orchestrator | ok: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-06 06:12:12.360020 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-06 06:12:12.360033 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-04-06 06:12:12.360045 | orchestrator |
2026-04-06 06:12:12.360059 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-04-06 06:12:12.360072 | orchestrator | Monday 06 April 2026 06:11:53 +0000 (0:00:07.416) 0:05:07.001 **********
2026-04-06 06:12:12.360085 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-06 06:12:12.360099 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-06 06:12:12.360111 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-06 06:12:12.360137 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-06 06:12:12.360148 | orchestrator | ok: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-06 06:12:12.360159 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-06 06:12:12.360169 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-06 06:12:12.360180 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-06 06:12:12.360212 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-06 06:12:12.360247 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-06 06:12:12.360332 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-06 06:12:12.360343 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-06 06:12:12.360354 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-06 06:12:12.360364 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:12:12.360375 | orchestrator | ok: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-06 06:12:12.360386 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-06 06:12:12.360397 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:12:12.360407 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-06 06:12:12.360427 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-06 06:12:12.360446 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:12:12.360460 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-06 06:12:12.360471 | orchestrator | ok: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-06 06:12:12.360482 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-06 06:12:12.360493 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-04-06 06:12:12.360504 | orchestrator | ok: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-06 06:12:12.360531 | orchestrator | ok: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-06 06:12:12.360542 | orchestrator | ok: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-06 06:12:12.360553 | orchestrator |
2026-04-06 06:12:12.360564 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-04-06 06:12:12.360574 | orchestrator | Monday 06 April 2026 06:12:01 +0000 (0:00:08.412) 0:05:15.413 **********
2026-04-06 06:12:12.360585 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:12:12.360596 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:12:12.360607 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:12:12.360617 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:12:12.360628 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:12:12.360639 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:12:12.360649 | orchestrator |
2026-04-06 06:12:12.360660 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-04-06 06:12:12.360671 | orchestrator | Monday 06 April 2026 06:12:03 +0000 (0:00:01.805) 0:05:17.219 **********
2026-04-06 06:12:12.360682 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:12:12.360692 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:12:12.360703 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:12:12.360714 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:12:12.360724 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:12:12.360735 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:12:12.360745 | orchestrator |
2026-04-06 06:12:12.360756 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-04-06 06:12:12.360767 | orchestrator | Monday 06 April 2026 06:12:05 +0000 (0:00:02.083) 0:05:19.303 **********
2026-04-06 06:12:12.360777 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:12:12.360788 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:12:12.360799 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:12:12.360810 | orchestrator | ok: [testbed-node-3]
2026-04-06 06:12:12.360820 | orchestrator | ok: [testbed-node-4]
2026-04-06 06:12:12.360831 | orchestrator | ok: [testbed-node-5]
2026-04-06 06:12:12.360864 | orchestrator |
2026-04-06 06:12:12.360883 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] *******************
2026-04-06 06:12:12.360895 | orchestrator | Monday 06 April 2026 06:12:08 +0000 (0:00:02.804) 0:05:22.108 **********
2026-04-06 06:12:12.360906 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:12:12.360917 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:12:12.360927 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:12:12.360938 | orchestrator | ok: [testbed-node-3]
2026-04-06 06:12:12.360949 | orchestrator | ok: [testbed-node-4]
2026-04-06 06:12:12.360960 | orchestrator | ok: [testbed-node-5]
2026-04-06 06:12:12.360971 | orchestrator |
2026-04-06 06:12:12.360982 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-04-06 06:12:12.360994 | orchestrator | Monday 06 April 2026 06:12:11 +0000 (0:00:03.084) 0:05:25.192 **********
2026-04-06 06:12:12.361028 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-06 06:12:12.361057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-06 06:12:13.276211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-06 06:12:13.276364 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:12:13.276383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-06 06:12:13.276419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:12:13.276448 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:12:13.276461 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}})  2026-04-06 06:12:13.276472 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:12:13.276502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:12:13.276515 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 06:12:13.276527 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:12:13.276550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:12:13.276562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:12:13.276573 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:12:13.276590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:12:13.276602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:12:13.276613 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:12:13.276632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:12:19.318512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:12:19.318652 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:12:19.318671 | orchestrator | 2026-04-06 06:12:19.318684 | 
orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-04-06 06:12:19.318697 | orchestrator | Monday 06 April 2026 06:12:14 +0000 (0:00:02.930) 0:05:28.123 ********** 2026-04-06 06:12:19.318710 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-06 06:12:19.318722 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-06 06:12:19.318733 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:12:19.318745 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-06 06:12:19.318756 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-06 06:12:19.318768 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:12:19.318779 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-06 06:12:19.318791 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-06 06:12:19.318802 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:12:19.318814 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-06 06:12:19.318826 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-06 06:12:19.318837 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:12:19.318849 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-06 06:12:19.318860 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-06 06:12:19.318871 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:12:19.318883 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-06 06:12:19.318894 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-06 06:12:19.318906 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:12:19.318918 | orchestrator | 2026-04-06 06:12:19.318930 | orchestrator | TASK [service-check-containers : nova_cell | Check containers] ***************** 
2026-04-06 06:12:19.318941 | orchestrator | Monday 06 April 2026 06:12:16 +0000 (0:00:02.106) 0:05:30.229 ********** 2026-04-06 06:12:19.318970 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 06:12:19.318985 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 06:12:19.319017 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-06 06:12:19.319041 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 06:12:19.319058 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 06:12:19.319074 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 06:12:19.319094 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-06 06:12:19.319108 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 06:12:19.319131 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-06 06:12:24.483654 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 06:12:24.483763 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 06:12:24.483780 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 06:12:24.483810 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-06 06:12:24.483823 | orchestrator | ok: [testbed-node-1] 
=> (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 06:12:24.483874 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 06:12:24.483888 | orchestrator | 2026-04-06 06:12:24.483902 | orchestrator | TASK [service-check-containers : nova_cell | Notify handlers to restart containers] *** 2026-04-06 06:12:24.483914 | orchestrator | Monday 06 April 2026 06:12:21 +0000 (0:00:04.964) 0:05:35.194 ********** 2026-04-06 06:12:24.483926 | orchestrator | ok: [testbed-node-3] => { 2026-04-06 06:12:24.483939 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:12:24.483950 | orchestrator | } 2026-04-06 06:12:24.483961 | orchestrator | ok: [testbed-node-4] => { 2026-04-06 06:12:24.483972 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:12:24.483983 | orchestrator | } 2026-04-06 06:12:24.483993 | orchestrator | ok: [testbed-node-5] => { 2026-04-06 06:12:24.484004 | 
orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:12:24.484015 | orchestrator | } 2026-04-06 06:12:24.484025 | orchestrator | ok: [testbed-node-0] => { 2026-04-06 06:12:24.484037 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:12:24.484048 | orchestrator | } 2026-04-06 06:12:24.484059 | orchestrator | ok: [testbed-node-1] => { 2026-04-06 06:12:24.484070 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:12:24.484081 | orchestrator | } 2026-04-06 06:12:24.484091 | orchestrator | ok: [testbed-node-2] => { 2026-04-06 06:12:24.484102 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:12:24.484113 | orchestrator | } 2026-04-06 06:12:24.484124 | orchestrator | 2026-04-06 06:12:24.484135 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-06 06:12:24.484146 | orchestrator | Monday 06 April 2026 06:12:23 +0000 (0:00:02.001) 0:05:37.196 ********** 2026-04-06 06:12:24.484159 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:12:24.484177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:12:24.484194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 06:12:24.484216 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:12:24.484237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:12:28.695485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:12:28.695614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 06:12:28.695633 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:12:28.695666 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-06 06:12:28.695704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-06 06:12:28.695717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-06 06:12:28.695728 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:12:28.695759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:12:28.695773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:12:28.695785 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:12:28.695796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 
'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:12:28.695813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:12:28.695833 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:12:28.695845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-06 06:12:28.695856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-06 06:12:28.695868 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:12:28.695879 | orchestrator | 2026-04-06 06:12:28.695891 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-06 06:12:28.695904 | orchestrator | Monday 06 April 2026 06:12:27 +0000 (0:00:03.708) 0:05:40.905 ********** 2026-04-06 06:12:28.695915 | orchestrator | 2026-04-06 06:12:28.695926 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-06 06:12:28.695936 | orchestrator | Monday 06 April 2026 06:12:27 +0000 (0:00:00.522) 0:05:41.427 ********** 2026-04-06 06:12:28.695947 | orchestrator | 2026-04-06 06:12:28.695958 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-06 06:12:28.695969 | orchestrator | Monday 06 April 2026 06:12:28 +0000 (0:00:00.560) 0:05:41.987 ********** 2026-04-06 06:12:28.695980 | orchestrator | 2026-04-06 06:12:28.695998 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-06 06:14:04.085923 | orchestrator | Monday 06 April 2026 06:12:29 +0000 (0:00:00.769) 0:05:42.757 ********** 2026-04-06 06:14:04.086154 | orchestrator | 2026-04-06 06:14:04.086188 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-06 06:14:04.086210 | orchestrator | Monday 
06 April 2026 06:12:29 +0000 (0:00:00.542) 0:05:43.299 ********** 2026-04-06 06:14:04.086230 | orchestrator | 2026-04-06 06:14:04.086251 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-06 06:14:04.086271 | orchestrator | Monday 06 April 2026 06:12:30 +0000 (0:00:00.508) 0:05:43.808 ********** 2026-04-06 06:14:04.086292 | orchestrator | 2026-04-06 06:14:04.086312 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-04-06 06:14:04.086365 | orchestrator | 2026-04-06 06:14:04.086386 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-04-06 06:14:04.086407 | orchestrator | Monday 06 April 2026 06:12:32 +0000 (0:00:01.910) 0:05:45.719 ********** 2026-04-06 06:14:04.086427 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:14:04.086450 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:14:04.086471 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:14:04.086493 | orchestrator | 2026-04-06 06:14:04.086515 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-04-06 06:14:04.086537 | orchestrator | 2026-04-06 06:14:04.086559 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-04-06 06:14:04.086580 | orchestrator | Monday 06 April 2026 06:12:33 +0000 (0:00:01.687) 0:05:47.407 ********** 2026-04-06 06:14:04.086633 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:14:04.086654 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:14:04.086675 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:14:04.086695 | orchestrator | 2026-04-06 06:14:04.086715 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-04-06 06:14:04.086735 | orchestrator | 2026-04-06 06:14:04.086756 | orchestrator | TASK [nova-cell : Reload nova cell services to remove 
RPC version cap] ********* 2026-04-06 06:14:04.086776 | orchestrator | Monday 06 April 2026 06:12:36 +0000 (0:00:02.567) 0:05:49.974 ********** 2026-04-06 06:14:04.086796 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-04-06 06:14:04.086815 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-04-06 06:14:04.086835 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-04-06 06:14:04.086856 | orchestrator | changed: [testbed-node-0] => (item=nova-conductor) 2026-04-06 06:14:04.086875 | orchestrator | changed: [testbed-node-1] => (item=nova-conductor) 2026-04-06 06:14:04.086893 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-06 06:14:04.086911 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-06 06:14:04.086927 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-06 06:14:04.086945 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-06 06:14:04.086965 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-06 06:14:04.086985 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-06 06:14:04.087005 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-04-06 06:14:04.087042 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-04-06 06:14:04.087065 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-04-06 06:14:04.087085 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-06 06:14:04.087105 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-06 06:14:04.087125 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-04-06 06:14:04.087145 | orchestrator | changed: [testbed-node-2] => (item=nova-conductor) 2026-04-06 06:14:04.087166 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy) 
 2026-04-06 06:14:04.087186 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-06 06:14:04.087206 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-06 06:14:04.087226 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-04-06 06:14:04.087247 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-06 06:14:04.087266 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-06 06:14:04.087286 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-04-06 06:14:04.087305 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-04-06 06:14:04.087351 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-04-06 06:14:04.087370 | orchestrator | changed: [testbed-node-0] => (item=nova-novncproxy) 2026-04-06 06:14:04.087389 | orchestrator | changed: [testbed-node-1] => (item=nova-novncproxy) 2026-04-06 06:14:04.087407 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-04-06 06:14:04.087424 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-04-06 06:14:04.087443 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-04-06 06:14:04.087464 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-04-06 06:14:04.087484 | orchestrator | changed: [testbed-node-2] => (item=nova-novncproxy) 2026-04-06 06:14:04.087503 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-04-06 06:14:04.087523 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-04-06 06:14:04.087558 | orchestrator | 2026-04-06 06:14:04.087579 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-04-06 06:14:04.087599 | orchestrator | 2026-04-06 06:14:04.087618 | orchestrator | TASK [nova : Reload nova API services to remove RPC 
version pin] *************** 2026-04-06 06:14:04.087639 | orchestrator | Monday 06 April 2026 06:13:12 +0000 (0:00:35.817) 0:06:25.791 ********** 2026-04-06 06:14:04.087659 | orchestrator | changed: [testbed-node-0] => (item=nova-scheduler) 2026-04-06 06:14:04.087704 | orchestrator | changed: [testbed-node-1] => (item=nova-scheduler) 2026-04-06 06:14:04.087726 | orchestrator | changed: [testbed-node-2] => (item=nova-scheduler) 2026-04-06 06:14:04.087746 | orchestrator | changed: [testbed-node-0] => (item=nova-api) 2026-04-06 06:14:04.087765 | orchestrator | changed: [testbed-node-1] => (item=nova-api) 2026-04-06 06:14:04.087786 | orchestrator | changed: [testbed-node-2] => (item=nova-api) 2026-04-06 06:14:04.087806 | orchestrator | 2026-04-06 06:14:04.087826 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-04-06 06:14:04.087847 | orchestrator | 2026-04-06 06:14:04.087867 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-04-06 06:14:04.087887 | orchestrator | Monday 06 April 2026 06:13:31 +0000 (0:00:19.786) 0:06:45.578 ********** 2026-04-06 06:14:04.087907 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:14:04.087928 | orchestrator | 2026-04-06 06:14:04.087948 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-04-06 06:14:04.087968 | orchestrator | 2026-04-06 06:14:04.087988 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-04-06 06:14:04.088008 | orchestrator | Monday 06 April 2026 06:13:49 +0000 (0:00:17.289) 0:07:02.867 ********** 2026-04-06 06:14:04.088028 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:14:04.088049 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:14:04.088068 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:14:04.088087 | orchestrator | 2026-04-06 06:14:04.088105 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-06 06:14:04.088121 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 06:14:04.088144 | orchestrator | testbed-node-0 : ok=39  changed=8  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-04-06 06:14:04.088164 | orchestrator | testbed-node-1 : ok=27  changed=5  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0 2026-04-06 06:14:04.088185 | orchestrator | testbed-node-2 : ok=27  changed=5  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0 2026-04-06 06:14:04.088205 | orchestrator | testbed-node-3 : ok=43  changed=5  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-06 06:14:04.088225 | orchestrator | testbed-node-4 : ok=37  changed=4  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-04-06 06:14:04.088245 | orchestrator | testbed-node-5 : ok=37  changed=4  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-04-06 06:14:04.088264 | orchestrator | 2026-04-06 06:14:04.088284 | orchestrator | 2026-04-06 06:14:04.088304 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 06:14:04.088356 | orchestrator | Monday 06 April 2026 06:14:03 +0000 (0:00:14.442) 0:07:17.310 ********** 2026-04-06 06:14:04.088378 | orchestrator | =============================================================================== 2026-04-06 06:14:04.088398 | orchestrator | nova-cell : Reload nova cell services to remove RPC version cap -------- 35.82s 2026-04-06 06:14:04.088418 | orchestrator | nova : Reload nova API services to remove RPC version pin -------------- 19.79s 2026-04-06 06:14:04.088438 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.36s 2026-04-06 06:14:04.088471 | orchestrator | nova : Run Nova upgrade checks ----------------------------------------- 18.81s 2026-04-06 06:14:04.088491 | orchestrator | 
nova : Run Nova API online database migrations ------------------------- 17.29s 2026-04-06 06:14:04.088511 | orchestrator | nova-cell : Run Nova cell online database migrations ------------------- 14.44s 2026-04-06 06:14:04.088531 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 13.63s 2026-04-06 06:14:04.088551 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.69s 2026-04-06 06:14:04.088571 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 9.22s 2026-04-06 06:14:04.088591 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 8.41s 2026-04-06 06:14:04.088611 | orchestrator | nova-cell : Copying over libvirt SASL configuration --------------------- 7.42s 2026-04-06 06:14:04.088631 | orchestrator | nova-cell : Get container facts ----------------------------------------- 5.38s 2026-04-06 06:14:04.088651 | orchestrator | nova-cell : Copying over libvirt configuration -------------------------- 5.34s 2026-04-06 06:14:04.088672 | orchestrator | service-check-containers : nova_cell | Check containers ----------------- 4.96s 2026-04-06 06:14:04.088722 | orchestrator | nova-cell : Copy over ceph.conf ----------------------------------------- 4.96s 2026-04-06 06:14:04.088769 | orchestrator | nova-cell : Get current Libvirt version --------------------------------- 4.95s 2026-04-06 06:14:04.088790 | orchestrator | service-check-containers : nova | Check containers ---------------------- 4.88s 2026-04-06 06:14:04.088810 | orchestrator | nova-cell : Flush handlers ---------------------------------------------- 4.81s 2026-04-06 06:14:04.088831 | orchestrator | nova-cell : Copying over config.json files for services ----------------- 4.72s 2026-04-06 06:14:04.088851 | orchestrator | service-cert-copy : nova | Copying over extra CA certificates ----------- 4.66s 2026-04-06 06:14:04.270586 | orchestrator | + osism apply 
-a upgrade horizon 2026-04-06 06:14:05.573901 | orchestrator | 2026-04-06 06:14:05 | INFO  | Prepare task for execution of horizon. 2026-04-06 06:14:05.638600 | orchestrator | 2026-04-06 06:14:05 | INFO  | Task 71ea0675-08c5-494f-ace6-ac45da7fc3f1 (horizon) was prepared for execution. 2026-04-06 06:14:05.638699 | orchestrator | 2026-04-06 06:14:05 | INFO  | It takes a moment until task 71ea0675-08c5-494f-ace6-ac45da7fc3f1 (horizon) has been started and output is visible here. 2026-04-06 06:14:20.380078 | orchestrator | 2026-04-06 06:14:20.380201 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 06:14:20.380226 | orchestrator | 2026-04-06 06:14:20.380246 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 06:14:20.380264 | orchestrator | Monday 06 April 2026 06:14:10 +0000 (0:00:01.632) 0:00:01.632 ********** 2026-04-06 06:14:20.380282 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:14:20.380300 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:14:20.380316 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:14:20.380396 | orchestrator | 2026-04-06 06:14:20.380416 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 06:14:20.380434 | orchestrator | Monday 06 April 2026 06:14:12 +0000 (0:00:01.796) 0:00:03.429 ********** 2026-04-06 06:14:20.380450 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-04-06 06:14:20.380465 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-04-06 06:14:20.380482 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-04-06 06:14:20.380499 | orchestrator | 2026-04-06 06:14:20.380518 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-04-06 06:14:20.380537 | orchestrator | 2026-04-06 06:14:20.380556 | orchestrator | TASK [horizon : include_tasks] 
************************************************* 2026-04-06 06:14:20.380574 | orchestrator | Monday 06 April 2026 06:14:14 +0000 (0:00:01.804) 0:00:05.234 ********** 2026-04-06 06:14:20.380594 | orchestrator | included: /ansible/roles/horizon/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 06:14:20.380645 | orchestrator | 2026-04-06 06:14:20.380665 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-04-06 06:14:20.380678 | orchestrator | Monday 06 April 2026 06:14:17 +0000 (0:00:03.163) 0:00:08.397 ********** 2026-04-06 06:14:20.380718 | orchestrator | ok: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 
'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-06 06:14:20.380765 | orchestrator | ok: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-06 06:14:20.380799 | orchestrator | ok: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-06 06:14:20.380813 | orchestrator | 2026-04-06 06:14:20.380826 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-04-06 06:14:20.380839 | orchestrator | Monday 06 April 2026 06:14:20 +0000 (0:00:02.675) 0:00:11.073 ********** 2026-04-06 06:14:20.380852 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:14:20.380865 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:14:20.380878 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:14:20.380889 | orchestrator | 2026-04-06 06:14:20.380908 | orchestrator | TASK [horizon : include_tasks] 
************************************************* 2026-04-06 06:14:46.514796 | orchestrator | Monday 06 April 2026 06:14:21 +0000 (0:00:01.348) 0:00:12.422 ********** 2026-04-06 06:14:46.514906 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-06 06:14:46.514921 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-06 06:14:46.514932 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-04-06 06:14:46.514942 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-04-06 06:14:46.514952 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-04-06 06:14:46.514987 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-04-06 06:14:46.514997 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-04-06 06:14:46.515007 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-04-06 06:14:46.515017 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-06 06:14:46.515026 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-06 06:14:46.515036 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-04-06 06:14:46.515046 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-04-06 06:14:46.515055 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-04-06 06:14:46.515065 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-04-06 06:14:46.515074 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-04-06 06:14:46.515084 | 
orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-04-06 06:14:46.515093 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-06 06:14:46.515103 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-06 06:14:46.515112 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-04-06 06:14:46.515121 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-04-06 06:14:46.515146 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-04-06 06:14:46.515156 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-04-06 06:14:46.515165 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-04-06 06:14:46.515175 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-04-06 06:14:46.515186 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-04-06 06:14:46.515197 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-04-06 06:14:46.515207 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-04-06 06:14:46.515217 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-04-06 06:14:46.515226 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => 
(item={'name': 'keystone', 'enabled': True})
2026-04-06 06:14:46.515236 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-04-06 06:14:46.515245 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-04-06 06:14:46.515255 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-04-06 06:14:46.515265 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-04-06 06:14:46.515276 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-04-06 06:14:46.515293 | orchestrator |
2026-04-06 06:14:46.515303 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-06 06:14:46.515313 | orchestrator | Monday 06 April 2026 06:14:23 +0000 (0:00:02.084) 0:00:14.506 **********
2026-04-06 06:14:46.515323 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:14:46.515333 | orchestrator | ok: [testbed-node-1]
2026-04-06 06:14:46.515372 | orchestrator | ok: [testbed-node-2]
2026-04-06 06:14:46.515384 | orchestrator |
2026-04-06 06:14:46.515411 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-06 06:14:46.515423 | orchestrator | Monday 06 April 2026 06:14:24 +0000 (0:00:01.429) 0:00:15.935 **********
2026-04-06 06:14:46.515434 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:14:46.515447 | orchestrator |
2026-04-06 06:14:46.515459 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-06 06:14:46.515470 | orchestrator | Monday 06 April 2026 06:14:26 +0000 (0:00:01.222) 0:00:17.157 **********
2026-04-06 06:14:46.515481 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:14:46.515492 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:14:46.515503 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:14:46.515514 | orchestrator |
2026-04-06 06:14:46.515527 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-06 06:14:46.515538 | orchestrator | Monday 06 April 2026 06:14:27 +0000 (0:00:01.302) 0:00:18.459 **********
2026-04-06 06:14:46.515549 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:14:46.515561 | orchestrator | ok: [testbed-node-1]
2026-04-06 06:14:46.515572 | orchestrator | ok: [testbed-node-2]
2026-04-06 06:14:46.515583 | orchestrator |
2026-04-06 06:14:46.515594 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-06 06:14:46.515605 | orchestrator | Monday 06 April 2026 06:14:28 +0000 (0:00:01.516) 0:00:19.976 **********
2026-04-06 06:14:46.515616 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:14:46.515628 | orchestrator |
2026-04-06 06:14:46.515638 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-06 06:14:46.515650 | orchestrator | Monday 06 April 2026 06:14:30 +0000 (0:00:01.130) 0:00:21.107 **********
2026-04-06 06:14:46.515661 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:14:46.515672 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:14:46.515684 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:14:46.515695 | orchestrator |
2026-04-06 06:14:46.515704 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-06 06:14:46.515714 | orchestrator | Monday 06 April 2026 06:14:31 +0000 (0:00:01.369) 0:00:22.477 **********
2026-04-06 06:14:46.515723 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:14:46.515733 | orchestrator | ok: [testbed-node-1]
2026-04-06 06:14:46.515742 | orchestrator | ok: [testbed-node-2]
2026-04-06 06:14:46.515752 | orchestrator |
2026-04-06 06:14:46.515762 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-06 06:14:46.515771 | orchestrator | Monday 06 April 2026 06:14:32 +0000 (0:00:01.343) 0:00:23.820 **********
2026-04-06 06:14:46.515781 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:14:46.515790 | orchestrator |
2026-04-06 06:14:46.515800 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-06 06:14:46.515809 | orchestrator | Monday 06 April 2026 06:14:33 +0000 (0:00:01.141) 0:00:24.961 **********
2026-04-06 06:14:46.515824 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:14:46.515834 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:14:46.515844 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:14:46.515854 | orchestrator |
2026-04-06 06:14:46.515863 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-06 06:14:46.515873 | orchestrator | Monday 06 April 2026 06:14:35 +0000 (0:00:01.620) 0:00:26.582 **********
2026-04-06 06:14:46.515882 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:14:46.515892 | orchestrator | ok: [testbed-node-1]
2026-04-06 06:14:46.515908 | orchestrator | ok: [testbed-node-2]
2026-04-06 06:14:46.515918 | orchestrator |
2026-04-06 06:14:46.515928 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-06 06:14:46.515937 | orchestrator | Monday 06 April 2026 06:14:36 +0000 (0:00:01.337) 0:00:27.920 **********
2026-04-06 06:14:46.515947 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:14:46.515956 | orchestrator |
2026-04-06 06:14:46.515966 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-06 06:14:46.515975 | orchestrator | Monday 06 April 2026 06:14:38 +0000 (0:00:01.143) 0:00:29.063 **********
2026-04-06 06:14:46.515985 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:14:46.515995 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:14:46.516004 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:14:46.516014 | orchestrator |
2026-04-06 06:14:46.516023 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-06 06:14:46.516033 | orchestrator | Monday 06 April 2026 06:14:39 +0000 (0:00:01.326) 0:00:30.389 **********
2026-04-06 06:14:46.516042 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:14:46.516052 | orchestrator | ok: [testbed-node-1]
2026-04-06 06:14:46.516062 | orchestrator | ok: [testbed-node-2]
2026-04-06 06:14:46.516071 | orchestrator |
2026-04-06 06:14:46.516081 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-06 06:14:46.516090 | orchestrator | Monday 06 April 2026 06:14:40 +0000 (0:00:01.589) 0:00:31.979 **********
2026-04-06 06:14:46.516100 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:14:46.516109 | orchestrator |
2026-04-06 06:14:46.516119 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-06 06:14:46.516128 | orchestrator | Monday 06 April 2026 06:14:42 +0000 (0:00:01.161) 0:00:33.141 **********
2026-04-06 06:14:46.516138 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:14:46.516147 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:14:46.516157 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:14:46.516166 | orchestrator |
2026-04-06 06:14:46.516176 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-06 06:14:46.516185 | orchestrator | Monday 06 April 2026 06:14:43 +0000 (0:00:01.352) 0:00:34.493 **********
2026-04-06 06:14:46.516195 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:14:46.516204 | orchestrator | ok: [testbed-node-1]
2026-04-06 06:14:46.516214 | orchestrator | ok: [testbed-node-2]
2026-04-06 06:14:46.516223 | orchestrator |
2026-04-06 06:14:46.516233 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-06 06:14:46.516242 | orchestrator | Monday 06 April 2026 06:14:44 +0000 (0:00:01.465) 0:00:35.959 **********
2026-04-06 06:14:46.516252 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:14:46.516262 | orchestrator |
2026-04-06 06:14:46.516271 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-06 06:14:46.516281 | orchestrator | Monday 06 April 2026 06:14:46 +0000 (0:00:01.131) 0:00:37.090 **********
2026-04-06 06:14:46.516290 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:14:46.516300 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:14:46.516316 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:15:20.418654 | orchestrator |
2026-04-06 06:15:20.418787 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-06 06:15:20.418819 | orchestrator | Monday 06 April 2026 06:14:47 +0000 (0:00:01.546) 0:00:38.637 **********
2026-04-06 06:15:20.418837 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:15:20.418858 | orchestrator | ok: [testbed-node-1]
2026-04-06 06:15:20.418877 | orchestrator | ok: [testbed-node-2]
2026-04-06 06:15:20.418897 | orchestrator |
2026-04-06 06:15:20.418915 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-06 06:15:20.418936 | orchestrator | Monday 06 April 2026 06:14:48 +0000 (0:00:01.407) 0:00:40.045 **********
2026-04-06 06:15:20.418948 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:15:20.418959 | orchestrator |
2026-04-06 06:15:20.418970 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-06 06:15:20.419006 | orchestrator | Monday 06 April 2026 06:14:50 +0000 (0:00:01.167) 0:00:41.212 **********
2026-04-06 06:15:20.419017 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:15:20.419028 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:15:20.419039 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:15:20.419050 | orchestrator |
2026-04-06 06:15:20.419061 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-06 06:15:20.419072 | orchestrator | Monday 06 April 2026 06:14:51 +0000 (0:00:01.460) 0:00:42.673 **********
2026-04-06 06:15:20.419082 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:15:20.419093 | orchestrator | ok: [testbed-node-1]
2026-04-06 06:15:20.419104 | orchestrator | ok: [testbed-node-2]
2026-04-06 06:15:20.419115 | orchestrator |
2026-04-06 06:15:20.419126 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-06 06:15:20.419136 | orchestrator | Monday 06 April 2026 06:14:52 +0000 (0:00:01.369) 0:00:44.042 **********
2026-04-06 06:15:20.419147 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:15:20.419158 | orchestrator |
2026-04-06 06:15:20.419169 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-06 06:15:20.419180 | orchestrator | Monday 06 April 2026 06:14:54 +0000 (0:00:01.125) 0:00:45.168 **********
2026-04-06 06:15:20.419190 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:15:20.419204 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:15:20.419216 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:15:20.419229 | orchestrator |
2026-04-06 06:15:20.419241 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-06 06:15:20.419254 | orchestrator | Monday 06 April 2026 06:14:55 +0000 (0:00:01.394) 0:00:46.563 **********
2026-04-06 06:15:20.419267 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:15:20.419279 | orchestrator | ok: [testbed-node-1]
2026-04-06 06:15:20.419292 | orchestrator | ok: [testbed-node-2]
2026-04-06 06:15:20.419304 | orchestrator |
2026-04-06 06:15:20.419332 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-06 06:15:20.419344 | orchestrator | Monday 06 April 2026 06:14:56 +0000 (0:00:01.326) 0:00:47.889 **********
2026-04-06 06:15:20.419358 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:15:20.419422 | orchestrator |
2026-04-06 06:15:20.419436 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-06 06:15:20.419447 | orchestrator | Monday 06 April 2026 06:14:57 +0000 (0:00:01.106) 0:00:48.996 **********
2026-04-06 06:15:20.419458 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:15:20.419468 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:15:20.419479 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:15:20.419490 | orchestrator |
2026-04-06 06:15:20.419500 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-06 06:15:20.419511 | orchestrator | Monday 06 April 2026 06:14:59 +0000 (0:00:01.410) 0:00:50.406 **********
2026-04-06 06:15:20.419522 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:15:20.419532 | orchestrator | ok: [testbed-node-1]
2026-04-06 06:15:20.419544 | orchestrator | ok: [testbed-node-2]
2026-04-06 06:15:20.419554 | orchestrator |
2026-04-06 06:15:20.419565 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-06 06:15:20.419576 | orchestrator | Monday 06 April 2026 06:15:00 +0000 (0:00:01.370) 0:00:51.777 **********
2026-04-06 06:15:20.419594 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:15:20.419612 | orchestrator |
2026-04-06 06:15:20.419631 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-06 06:15:20.419649 | orchestrator | Monday 06 April 2026 06:15:01 +0000 (0:00:01.107) 0:00:52.884 **********
2026-04-06 06:15:20.419666 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:15:20.419684 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:15:20.419703 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:15:20.419722 | orchestrator |
2026-04-06 06:15:20.419739 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-04-06 06:15:20.419758 | orchestrator | Monday 06 April 2026 06:15:03 +0000 (0:00:01.371) 0:00:54.256 **********
2026-04-06 06:15:20.419789 | orchestrator | changed: [testbed-node-1]
2026-04-06 06:15:20.419807 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:15:20.419826 | orchestrator | changed: [testbed-node-2]
2026-04-06 06:15:20.419845 | orchestrator |
2026-04-06 06:15:20.419864 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-04-06 06:15:20.419882 | orchestrator | Monday 06 April 2026 06:15:06 +0000 (0:00:02.894) 0:00:57.150 **********
2026-04-06 06:15:20.419902 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-06 06:15:20.419914 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-06 06:15:20.419924 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-06 06:15:20.419935 | orchestrator |
2026-04-06 06:15:20.419946 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-04-06 06:15:20.419957 | orchestrator | Monday 06 April 2026 06:15:08 +0000 (0:00:02.748) 0:00:59.899 **********
2026-04-06 06:15:20.419968 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-06 06:15:20.419988 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-06 06:15:20.420030 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-06 06:15:20.420050 | orchestrator |
2026-04-06 06:15:20.420068 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-04-06 06:15:20.420085 | orchestrator | Monday 06 April 2026 06:15:11 +0000 (0:00:02.754) 0:01:02.653 **********
2026-04-06 06:15:20.420103 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-06 06:15:20.420121 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-06 06:15:20.420136 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-06 06:15:20.420155 | orchestrator |
2026-04-06 06:15:20.420171 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-04-06 06:15:20.420189 | orchestrator | Monday 06 April 2026 06:15:14 +0000 (0:00:02.534) 0:01:05.188 **********
2026-04-06 06:15:20.420206 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:15:20.420223 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:15:20.420241 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:15:20.420259 | orchestrator |
2026-04-06 06:15:20.420276 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-04-06 06:15:20.420294 | orchestrator | Monday 06 April 2026 06:15:15 +0000 (0:00:01.598) 0:01:06.580 **********
2026-04-06 06:15:20.420312 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:15:20.420330 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:15:20.420349 |
orchestrator | skipping: [testbed-node-2] 2026-04-06 06:15:20.420396 | orchestrator | 2026-04-06 06:15:20.420414 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-06 06:15:20.420432 | orchestrator | Monday 06 April 2026 06:15:17 +0000 (0:00:01.598) 0:01:08.179 ********** 2026-04-06 06:15:20.420449 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 06:15:20.420468 | orchestrator | 2026-04-06 06:15:20.420489 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-06 06:15:20.420508 | orchestrator | Monday 06 April 2026 06:15:18 +0000 (0:00:01.789) 0:01:09.968 ********** 2026-04-06 06:15:20.420549 | orchestrator | ok: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-06 06:15:20.420621 | orchestrator | ok: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-06 06:15:22.253898 | orchestrator | ok: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 
'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-06 06:15:22.254006 | orchestrator | 2026-04-06 06:15:22.254089 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-06 06:15:22.254103 | orchestrator | Monday 06 April 2026 06:15:21 +0000 (0:00:02.721) 0:01:12.690 ********** 2026-04-06 06:15:22.254154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 
'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-06 06:15:22.254193 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:15:22.254206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-06 06:15:22.254219 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:15:22.254246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 
'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-06 06:15:27.024193 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:15:27.024294 | orchestrator | 2026-04-06 06:15:27.024308 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-06 06:15:27.024318 | orchestrator | Monday 06 April 2026 06:15:23 +0000 (0:00:01.700) 0:01:14.390 ********** 2026-04-06 06:15:27.024331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-06 06:15:27.024345 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:15:27.024432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-06 06:15:27.024465 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:15:27.024475 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-06 06:15:27.024491 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:15:27.024500 | orchestrator | 2026-04-06 06:15:27.024514 | orchestrator | TASK [service-check-containers : horizon | Check containers] ******************* 2026-04-06 06:15:27.024523 | orchestrator | Monday 06 April 2026 06:15:25 +0000 (0:00:02.125) 0:01:16.516 ********** 2026-04-06 06:15:27.024541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-06 06:15:30.061822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-06 06:15:30.062081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-06 06:15:30.062110 | orchestrator | 2026-04-06 06:15:30.062124 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] *** 2026-04-06 06:15:30.062137 | orchestrator | Monday 06 April 2026 06:15:28 +0000 (0:00:02.755) 0:01:19.272 ********** 2026-04-06 06:15:30.062149 | orchestrator | changed: [testbed-node-0] => { 2026-04-06 06:15:30.062162 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:15:30.062209 | orchestrator | } 2026-04-06 06:15:30.062221 | orchestrator | changed: [testbed-node-1] => { 2026-04-06 06:15:30.062232 | orchestrator |  "msg": 
"Notifying handlers" 2026-04-06 06:15:30.062243 | orchestrator | } 2026-04-06 06:15:30.062254 | orchestrator | changed: [testbed-node-2] => { 2026-04-06 06:15:30.062275 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:15:30.062286 | orchestrator | } 2026-04-06 06:15:30.062297 | orchestrator | 2026-04-06 06:15:30.062308 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-06 06:15:30.062319 | orchestrator | Monday 06 April 2026 06:15:29 +0000 (0:00:01.354) 0:01:20.627 ********** 2026-04-06 06:15:30.062340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-06 06:15:30.062354 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:15:30.062406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-06 06:16:42.765896 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:16:42.766190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-06 06:16:42.766234 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:16:42.766255 | orchestrator | 2026-04-06 06:16:42.766274 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-06 06:16:42.766296 | orchestrator | Monday 06 April 2026 06:15:31 +0000 (0:00:02.220) 0:01:22.847 ********** 2026-04-06 06:16:42.766314 | orchestrator | skipping: [testbed-node-0] 2026-04-06 
06:16:42.766333 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:16:42.766352 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:16:42.766371 | orchestrator | 2026-04-06 06:16:42.766390 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-06 06:16:42.766439 | orchestrator | Monday 06 April 2026 06:15:33 +0000 (0:00:01.337) 0:01:24.185 ********** 2026-04-06 06:16:42.766463 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 06:16:42.766515 | orchestrator | 2026-04-06 06:16:42.766535 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-04-06 06:16:42.766554 | orchestrator | Monday 06 April 2026 06:15:34 +0000 (0:00:01.691) 0:01:25.876 ********** 2026-04-06 06:16:42.766571 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:16:42.766591 | orchestrator | 2026-04-06 06:16:42.766609 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-06 06:16:42.766628 | orchestrator | Monday 06 April 2026 06:16:10 +0000 (0:00:35.581) 0:02:01.457 ********** 2026-04-06 06:16:42.766647 | orchestrator | 2026-04-06 06:16:42.766666 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-06 06:16:42.766684 | orchestrator | Monday 06 April 2026 06:16:11 +0000 (0:00:00.632) 0:02:02.090 ********** 2026-04-06 06:16:42.766703 | orchestrator | 2026-04-06 06:16:42.766721 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-06 06:16:42.766739 | orchestrator | Monday 06 April 2026 06:16:11 +0000 (0:00:00.448) 0:02:02.539 ********** 2026-04-06 06:16:42.766758 | orchestrator | 2026-04-06 06:16:42.766775 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-04-06 06:16:42.766793 | orchestrator | 
Monday 06 April 2026 06:16:12 +0000 (0:00:00.807) 0:02:03.347 ********** 2026-04-06 06:16:42.766810 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:16:42.766828 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:16:42.766845 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:16:42.766863 | orchestrator | 2026-04-06 06:16:42.766880 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 06:16:42.766899 | orchestrator | testbed-node-0 : ok=36  changed=6  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0 2026-04-06 06:16:42.766947 | orchestrator | testbed-node-1 : ok=35  changed=5  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-06 06:16:42.766968 | orchestrator | testbed-node-2 : ok=35  changed=5  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-06 06:16:42.766987 | orchestrator | 2026-04-06 06:16:42.767006 | orchestrator | 2026-04-06 06:16:42.767037 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 06:16:42.767057 | orchestrator | Monday 06 April 2026 06:16:42 +0000 (0:00:30.062) 0:02:33.409 ********** 2026-04-06 06:16:42.767075 | orchestrator | =============================================================================== 2026-04-06 06:16:42.767094 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 35.58s 2026-04-06 06:16:42.767112 | orchestrator | horizon : Restart horizon container ------------------------------------ 30.06s 2026-04-06 06:16:42.767132 | orchestrator | horizon : include_tasks ------------------------------------------------- 3.16s 2026-04-06 06:16:42.767150 | orchestrator | horizon : Copying over config.json files for services ------------------- 2.89s 2026-04-06 06:16:42.767167 | orchestrator | service-check-containers : horizon | Check containers ------------------- 2.76s 2026-04-06 06:16:42.767185 | orchestrator | horizon : Copying over 
kolla-settings.py -------------------------------- 2.75s 2026-04-06 06:16:42.767202 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.75s 2026-04-06 06:16:42.767219 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 2.72s 2026-04-06 06:16:42.767234 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 2.68s 2026-04-06 06:16:42.767253 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.54s 2026-04-06 06:16:42.767271 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.22s 2026-04-06 06:16:42.767289 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 2.13s 2026-04-06 06:16:42.767308 | orchestrator | horizon : include_tasks ------------------------------------------------- 2.08s 2026-04-06 06:16:42.767347 | orchestrator | horizon : Flush handlers ------------------------------------------------ 1.89s 2026-04-06 06:16:42.767362 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.80s 2026-04-06 06:16:42.767380 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.80s 2026-04-06 06:16:42.767399 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.79s 2026-04-06 06:16:42.767449 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 1.70s 2026-04-06 06:16:42.767468 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.69s 2026-04-06 06:16:42.767485 | orchestrator | horizon : Update custom policy file name -------------------------------- 1.62s 2026-04-06 06:16:42.938689 | orchestrator | + osism apply -a upgrade skyline 2026-04-06 06:16:44.286982 | orchestrator | 2026-04-06 06:16:44 | INFO  | Prepare task for execution of skyline. 
2026-04-06 06:16:44.366395 | orchestrator | 2026-04-06 06:16:44 | INFO  | Task ef37b0ec-1ebf-40aa-ab53-dba3a4528d08 (skyline) was prepared for execution. 2026-04-06 06:16:44.366593 | orchestrator | 2026-04-06 06:16:44 | INFO  | It takes a moment until task ef37b0ec-1ebf-40aa-ab53-dba3a4528d08 (skyline) has been started and output is visible here. 2026-04-06 06:17:02.995075 | orchestrator | 2026-04-06 06:17:02.995195 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 06:17:02.995212 | orchestrator | 2026-04-06 06:17:02.995224 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 06:17:02.995236 | orchestrator | Monday 06 April 2026 06:16:49 +0000 (0:00:01.476) 0:00:01.476 ********** 2026-04-06 06:17:02.995248 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:17:02.995260 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:17:02.995271 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:17:02.995282 | orchestrator | 2026-04-06 06:17:02.995293 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 06:17:02.995305 | orchestrator | Monday 06 April 2026 06:16:51 +0000 (0:00:02.011) 0:00:03.487 ********** 2026-04-06 06:17:02.995316 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True) 2026-04-06 06:17:02.995327 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True) 2026-04-06 06:17:02.995338 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True) 2026-04-06 06:17:02.995349 | orchestrator | 2026-04-06 06:17:02.995360 | orchestrator | PLAY [Apply role skyline] ****************************************************** 2026-04-06 06:17:02.995371 | orchestrator | 2026-04-06 06:17:02.995382 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-04-06 06:17:02.995393 | orchestrator | Monday 06 April 2026 06:16:54 +0000 
(0:00:02.739) 0:00:06.227 ********** 2026-04-06 06:17:02.995405 | orchestrator | included: /ansible/roles/skyline/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 06:17:02.995416 | orchestrator | 2026-04-06 06:17:02.995511 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-04-06 06:17:02.995527 | orchestrator | Monday 06 April 2026 06:16:56 +0000 (0:00:02.926) 0:00:09.153 ********** 2026-04-06 06:17:02.995562 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-06 06:17:02.995603 | orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-06 06:17:02.995640 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-06 06:17:02.995657 | orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-06 06:17:02.995678 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-06 06:17:02.995701 | orchestrator | ok: [testbed-node-0] => 
(item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-06 06:17:02.995714 | orchestrator | 2026-04-06 06:17:02.995728 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-04-06 06:17:02.995741 | orchestrator | Monday 06 April 2026 06:16:59 +0000 (0:00:02.762) 0:00:11.916 ********** 2026-04-06 06:17:02.995754 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 06:17:02.995767 | orchestrator | 2026-04-06 06:17:02.995781 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-04-06 06:17:02.995794 | orchestrator | Monday 06 April 2026 06:17:01 +0000 (0:00:01.851) 0:00:13.767 ********** 2026-04-06 06:17:02.995817 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-06 06:17:05.527982 | orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-06 06:17:05.528158 | 
orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-06 06:17:05.528189 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-06 06:17:05.528236 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-06 06:17:05.528264 | orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-06 06:17:05.528294 | orchestrator | 2026-04-06 06:17:05.528313 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-04-06 06:17:05.528332 | orchestrator | Monday 06 April 2026 06:17:05 +0000 (0:00:03.420) 0:00:17.188 ********** 2026-04-06 06:17:05.528351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-06 06:17:05.528369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-06 06:17:05.528388 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:17:05.528448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-06 06:17:07.227983 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-06 06:17:07.228085 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:17:07.228096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-06 06:17:07.228105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-06 06:17:07.228112 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:17:07.228120 | orchestrator | 2026-04-06 06:17:07.228126 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-04-06 06:17:07.228133 | orchestrator | Monday 06 April 2026 06:17:06 +0000 (0:00:01.672) 0:00:18.860 ********** 2026-04-06 06:17:07.228156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-06 06:17:07.228172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-06 06:17:07.228178 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:17:07.228185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-06 06:17:07.228192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-06 06:17:07.228199 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:17:07.228212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-06 06:17:15.869963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 
'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-06 06:17:15.870139 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:17:15.870159 | orchestrator | 2026-04-06 06:17:15.870171 | orchestrator | TASK [skyline : Copying over skyline.yaml files for services] ****************** 2026-04-06 06:17:15.870183 | orchestrator | Monday 06 April 2026 06:17:08 +0000 (0:00:01.736) 0:00:20.597 ********** 2026-04-06 06:17:15.870196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-06 06:17:15.870210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-06 06:17:15.870266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-06 06:17:15.870282 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-06 06:17:15.870294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-06 06:17:15.870306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-06 06:17:15.870325 | orchestrator | 2026-04-06 06:17:15.870337 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-04-06 06:17:15.870348 | orchestrator | Monday 06 April 2026 06:17:11 +0000 (0:00:03.519) 0:00:24.116 ********** 2026-04-06 06:17:15.870359 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-06 06:17:15.870370 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-06 06:17:15.870381 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-06 06:17:15.870391 | orchestrator | 2026-04-06 06:17:15.870402 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] 
******************** 2026-04-06 06:17:15.870413 | orchestrator | Monday 06 April 2026 06:17:14 +0000 (0:00:02.540) 0:00:26.656 ********** 2026-04-06 06:17:15.870472 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-06 06:17:24.477881 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-06 06:17:24.477992 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-06 06:17:24.478009 | orchestrator | 2026-04-06 06:17:24.478099 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-04-06 06:17:24.478113 | orchestrator | Monday 06 April 2026 06:17:17 +0000 (0:00:02.990) 0:00:29.648 ********** 2026-04-06 06:17:24.478130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-06 06:17:24.478148 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-06 06:17:24.478161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 
'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-06 06:17:24.478222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-06 06:17:24.478237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET 
/']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 06:17:24.478250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 06:17:24.478269 | orchestrator |
2026-04-06 06:17:24.478281 | orchestrator | TASK [skyline : Copying over custom logos] *************************************
2026-04-06 06:17:24.478293 | orchestrator | Monday 06 April 2026 06:17:21 +0000 (0:00:03.807) 0:00:33.455 **********
2026-04-06 06:17:24.478304 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:17:24.478316 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:17:24.478327 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:17:24.478338 | orchestrator |
2026-04-06 06:17:24.478348 | orchestrator | TASK [service-check-containers : skyline | Check containers] *******************
2026-04-06 06:17:24.478359 | orchestrator | Monday 06 April 2026 06:17:23 +0000 (0:00:01.723) 0:00:35.179 **********
2026-04-06 06:17:24.478371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-06 06:17:24.478398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no',
'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-06 06:17:28.443971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-06 06:17:28.444074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-06 06:17:28.444117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-06 06:17:28.444159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 06:17:28.444173 | orchestrator |
2026-04-06 06:17:28.444184 | orchestrator | TASK [service-check-containers : skyline | Notify handlers to restart containers] ***
2026-04-06 06:17:28.444196 | orchestrator | Monday 06 April 2026 06:17:26 +0000 (0:00:03.528) 0:00:38.708 **********
2026-04-06 06:17:28.444207 | orchestrator | changed: [testbed-node-0] => {
2026-04-06 06:17:28.444217 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 06:17:28.444227 | orchestrator | }
2026-04-06 06:17:28.444237 | orchestrator | changed: [testbed-node-1] => {
2026-04-06 06:17:28.444247 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 06:17:28.444257 | orchestrator | }
2026-04-06 06:17:28.444267 | orchestrator | changed: [testbed-node-2] => {
2026-04-06 06:17:28.444276 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 06:17:28.444286 | orchestrator | }
2026-04-06 06:17:28.444321 | orchestrator |
2026-04-06 06:17:28.444331 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-06 06:17:28.444349 | orchestrator | Monday 06 April 2026 06:17:27 +0000 (0:00:01.393) 0:00:40.102 **********
2026-04-06 06:17:28.444360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled':
True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-06 06:17:28.444372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-06 06:17:28.444384 | 
orchestrator | skipping: [testbed-node-0] 2026-04-06 06:17:28.444399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-06 06:17:28.444420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-06 06:18:09.519383 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:18:09.519504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-06 06:18:09.519515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 
'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-06 06:18:09.519521 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:18:09.519525 | orchestrator |
2026-04-06 06:18:09.519530 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-04-06 06:18:09.519535 | orchestrator | Monday 06 April 2026 06:17:30 +0000 (0:00:02.078) 0:00:42.180 **********
2026-04-06 06:18:09.519539 | orchestrator |
2026-04-06 06:18:09.519543 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-04-06 06:18:09.519557 | orchestrator | Monday 06 April 2026 06:17:30 +0000 (0:00:00.496) 0:00:42.677 **********
2026-04-06 06:18:09.519561 | orchestrator |
2026-04-06 06:18:09.519565 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-04-06 06:18:09.519568 | orchestrator | Monday 06 April 2026 06:17:30 +0000 (0:00:00.452) 0:00:43.130 **********
2026-04-06 06:18:09.519572 | orchestrator |
2026-04-06 06:18:09.519576 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] ****************
2026-04-06 06:18:09.519580 | orchestrator | Monday 06 April 2026 06:17:31 +0000 (0:00:00.793) 0:00:43.923 **********
2026-04-06 06:18:09.519584 | orchestrator | changed: [testbed-node-2]
2026-04-06 06:18:09.519588 | orchestrator | changed: [testbed-node-1]
2026-04-06 06:18:09.519592 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:18:09.519596 | orchestrator |
2026-04-06 06:18:09.519599 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ******************
2026-04-06 06:18:09.519603 | orchestrator | Monday 06 April 2026 06:17:46 +0000 (0:00:14.750) 0:00:58.674 **********
2026-04-06 06:18:09.519621 | orchestrator | changed: [testbed-node-2]
2026-04-06 06:18:09.519625 | orchestrator | changed: [testbed-node-1]
2026-04-06 06:18:09.519629 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:18:09.519633 | orchestrator |
2026-04-06 06:18:09.519636 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 06:18:09.519641 | orchestrator | testbed-node-0 : ok=14  changed=7  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-06 06:18:09.519647 | orchestrator | testbed-node-1 : ok=14  changed=7  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-06 06:18:09.519651 | orchestrator | testbed-node-2 : ok=14  changed=7  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-06 06:18:09.519654 | orchestrator |
2026-04-06 06:18:09.519658 | orchestrator |
2026-04-06 06:18:09.519662 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 06:18:09.519666 | orchestrator | Monday 06 April 2026 06:18:09 +0000 (0:00:22.662) 0:01:21.337 **********
2026-04-06 06:18:09.519670 | orchestrator | ===============================================================================
2026-04-06 06:18:09.519683 | orchestrator | skyline : Restart skyline-console container ---------------------------- 22.66s
2026-04-06 06:18:09.519687 | orchestrator | skyline : Restart skyline-apiserver container -------------------------- 14.75s
2026-04-06 06:18:09.519691 | orchestrator | skyline : Copying over config.json files for services ------------------- 3.81s
2026-04-06 06:18:09.519695 | orchestrator | service-check-containers : skyline | Check containers ------------------- 3.52s
2026-04-06 06:18:09.519699 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 3.52s
2026-04-06 06:18:09.519703 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 3.42s
2026-04-06 06:18:09.519707 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 2.99s
2026-04-06 06:18:09.519711 | orchestrator | skyline : include_tasks ------------------------------------------------- 2.93s
2026-04-06 06:18:09.519715 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 2.76s
2026-04-06 06:18:09.519719 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.74s
2026-04-06 06:18:09.519722 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 2.54s
2026-04-06 06:18:09.519726 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.08s
2026-04-06 06:18:09.519730 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.01s
2026-04-06 06:18:09.519734 | orchestrator | skyline : include_tasks ------------------------------------------------- 1.85s
2026-04-06 06:18:09.519738 | orchestrator | skyline : Flush handlers ------------------------------------------------ 1.74s
2026-04-06 06:18:09.519742 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.74s
2026-04-06 06:18:09.519745 | orchestrator | skyline : Copying over custom logos ------------------------------------- 1.73s
2026-04-06 06:18:09.519749 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS certificate --- 1.67s
2026-04-06 06:18:09.519754 | orchestrator | service-check-containers : skyline | Notify handlers to restart containers --- 1.39s
2026-04-06 06:18:09.693685 | orchestrator | + osism apply -a upgrade glance
2026-04-06 06:18:11.066611 | orchestrator | 2026-04-06 06:18:11 | INFO  | Prepare task for execution of glance.
2026-04-06 06:18:11.131763 | orchestrator | 2026-04-06 06:18:11 | INFO  | Task d322ea90-f42b-42b5-bff3-1e929f569a7b (glance) was prepared for execution.
2026-04-06 06:18:11.131831 | orchestrator | 2026-04-06 06:18:11 | INFO  | It takes a moment until task d322ea90-f42b-42b5-bff3-1e929f569a7b (glance) has been started and output is visible here.
2026-04-06 06:18:56.741925 | orchestrator |
2026-04-06 06:18:56.742011 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 06:18:56.742078 | orchestrator |
2026-04-06 06:18:56.742083 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 06:18:56.742088 | orchestrator | Monday 06 April 2026 06:18:16 +0000 (0:00:01.644) 0:00:01.644 **********
2026-04-06 06:18:56.742093 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:18:56.742099 | orchestrator | ok: [testbed-node-1]
2026-04-06 06:18:56.742104 | orchestrator | ok: [testbed-node-2]
2026-04-06 06:18:56.742126 | orchestrator |
2026-04-06 06:18:56.742131 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 06:18:56.742136 | orchestrator | Monday 06 April 2026 06:18:17 +0000 (0:00:01.889) 0:00:03.534 **********
2026-04-06 06:18:56.742141 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-04-06 06:18:56.742155 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-04-06 06:18:56.742160 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-04-06 06:18:56.742165 | orchestrator |
2026-04-06 06:18:56.742170 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-04-06 06:18:56.742175 | orchestrator |
2026-04-06 06:18:56.742180 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-06 06:18:56.742184 | orchestrator | Monday 06 April 2026 06:18:20 +0000 (0:00:02.416) 0:00:05.950 **********
2026-04-06 06:18:56.742189 | orchestrator | included: /ansible/roles/glance/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 06:18:56.742195 | orchestrator |
2026-04-06 06:18:56.742200 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-06 06:18:56.742204 | orchestrator | Monday 06 April 2026 06:18:23 +0000 (0:00:03.130) 0:00:09.081 **********
2026-04-06 06:18:56.742209 | orchestrator | included: /ansible/roles/glance/tasks/rolling_upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 06:18:56.742214 | orchestrator |
2026-04-06 06:18:56.742219 | orchestrator | TASK [glance : Start Glance upgrade] *******************************************
2026-04-06 06:18:56.742223 | orchestrator | Monday 06 April 2026 06:18:25 +0000 (0:00:01.938) 0:00:11.020 **********
2026-04-06 06:18:56.742228 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:18:56.742233 | orchestrator | ok: [testbed-node-1]
2026-04-06 06:18:56.742237 | orchestrator | ok: [testbed-node-2]
2026-04-06 06:18:56.742242 | orchestrator |
2026-04-06 06:18:56.742247 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-06 06:18:56.742251 | orchestrator | Monday 06 April 2026 06:18:26 +0000 (0:00:01.327) 0:00:12.348 **********
2026-04-06 06:18:56.742256 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:18:56.742261 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:18:56.742266 | orchestrator | included: /ansible/roles/glance/tasks/config.yml for testbed-node-0
2026-04-06 06:18:56.742271 | orchestrator |
2026-04-06 06:18:56.742276 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-04-06 06:18:56.742280 | orchestrator | Monday 06 April 2026 06:18:28 +0000 (0:00:01.699) 0:00:14.048 **********
2026-04-06 06:18:56.742288 | orchestrator | ok: [testbed-node-0] => (item={'key':
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-06 06:18:56.742300 | orchestrator |
2026-04-06 06:18:56.742305 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-06 06:18:56.742310 | orchestrator | Monday 06 April 2026 06:18:33 +0000 (0:00:04.741) 0:00:18.790 **********
2026-04-06 06:18:56.742314 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0
2026-04-06 06:18:56.742319 | orchestrator |
2026-04-06 06:18:56.742334 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-04-06 06:18:56.742339 | orchestrator | Monday 06 April 2026 06:18:34 +0000 (0:00:01.477) 0:00:20.267 **********
2026-04-06 06:18:56.742344 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:18:56.742349 | orchestrator |
2026-04-06 06:18:56.742353 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-04-06 06:18:56.742358 | orchestrator | Monday 06 April 2026 06:18:39 +0000 (0:00:04.507) 0:00:24.775 **********
2026-04-06 06:18:56.742363 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-04-06 06:18:56.742368 | orchestrator |
2026-04-06 06:18:56.742373 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-04-06 06:18:56.742380 | orchestrator | Monday 06 April 2026 06:18:41 +0000 (0:00:02.026) 0:00:27.311 **********
2026-04-06 06:18:56.742385 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-04-06 06:18:56.742390 | orchestrator |
2026-04-06 06:18:56.742394 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-04-06 06:18:56.742399 | orchestrator | Monday 06 April 2026 06:18:43 +0000 (0:00:01.485) 0:00:29.338 **********
2026-04-06 06:18:56.742403 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:18:56.742408 | orchestrator |
2026-04-06 06:18:56.742413 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-04-06 06:18:56.742417 | orchestrator | Monday 06 April 2026 06:18:45 +0000 (0:00:01.142) 0:00:30.823 **********
2026-04-06 06:18:56.742422 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:18:56.742426 | orchestrator |
2026-04-06 06:18:56.742431 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-04-06 06:18:56.742435 | orchestrator | Monday 06 April 2026 06:18:46 +0000 (0:00:01.154) 0:00:31.966 **********
2026-04-06 06:18:56.742440 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:18:56.742445 | orchestrator |
2026-04-06 06:18:56.742449 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-06 06:18:56.742454 | orchestrator | Monday 06 April 2026 06:18:47 +0000 (0:00:01.509) 0:00:33.121 **********
2026-04-06 06:18:56.742458 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0
2026-04-06 06:18:56.742463 | orchestrator |
2026-04-06 06:18:56.742467 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-04-06 06:18:56.742472 | orchestrator | Monday 06 April 2026 06:18:49 +0000 (0:00:01.509) 0:00:34.630 **********
2026-04-06 06:18:56.742515 | orchestrator | ok: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy':
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 06:18:56.742527 | orchestrator | 2026-04-06 06:18:56.742533 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-06 06:18:56.742538 | orchestrator | Monday 06 April 2026 06:18:53 +0000 (0:00:04.782) 0:00:39.412 ********** 2026-04-06 06:18:56.742552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-06 06:20:49.978920 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:20:49.979022 | orchestrator | 2026-04-06 06:20:49.979036 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-06 06:20:49.979047 | orchestrator | Monday 06 April 2026 06:18:57 +0000 (0:00:04.029) 0:00:43.442 ********** 2026-04-06 06:20:49.979060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-06 06:20:49.979095 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:20:49.979105 | orchestrator | 2026-04-06 06:20:49.979114 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-06 06:20:49.979123 | orchestrator | Monday 06 April 2026 06:19:01 +0000 (0:00:03.951) 0:00:47.393 ********** 2026-04-06 06:20:49.979131 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:20:49.979140 | orchestrator | 2026-04-06 06:20:49.979148 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-04-06 06:20:49.979157 | orchestrator | Monday 06 April 2026 06:19:06 +0000 (0:00:04.364) 0:00:51.757 ********** 2026-04-06 06:20:49.979196 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 06:20:49.979208 | orchestrator | 2026-04-06 06:20:49.979217 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-04-06 06:20:49.979234 | orchestrator | Monday 06 April 2026 06:19:11 +0000 (0:00:05.133) 0:00:56.891 ********** 2026-04-06 
06:20:49.979243 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:20:49.979252 | orchestrator | 2026-04-06 06:20:49.979261 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-04-06 06:20:49.979270 | orchestrator | Monday 06 April 2026 06:19:17 +0000 (0:00:06.505) 0:01:03.396 ********** 2026-04-06 06:20:49.979278 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:20:49.979287 | orchestrator | 2026-04-06 06:20:49.979295 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-04-06 06:20:49.979304 | orchestrator | Monday 06 April 2026 06:19:22 +0000 (0:00:04.235) 0:01:07.632 ********** 2026-04-06 06:20:49.979313 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:20:49.979321 | orchestrator | 2026-04-06 06:20:49.979330 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-04-06 06:20:49.979338 | orchestrator | Monday 06 April 2026 06:19:26 +0000 (0:00:04.177) 0:01:11.809 ********** 2026-04-06 06:20:49.979347 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:20:49.979356 | orchestrator | 2026-04-06 06:20:49.979364 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-04-06 06:20:49.979373 | orchestrator | Monday 06 April 2026 06:19:30 +0000 (0:00:04.206) 0:01:16.016 ********** 2026-04-06 06:20:49.979382 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:20:49.979390 | orchestrator | 2026-04-06 06:20:49.979399 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-04-06 06:20:49.979407 | orchestrator | Monday 06 April 2026 06:19:31 +0000 (0:00:01.107) 0:01:17.124 ********** 2026-04-06 06:20:49.979416 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-06 06:20:49.979426 | orchestrator | skipping: [testbed-node-0] 2026-04-06 
06:20:49.979434 | orchestrator | 2026-04-06 06:20:49.979443 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-04-06 06:20:49.979452 | orchestrator | Monday 06 April 2026 06:19:35 +0000 (0:00:04.284) 0:01:21.409 ********** 2026-04-06 06:20:49.979461 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:20:49.979472 | orchestrator | 2026-04-06 06:20:49.979482 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************ 2026-04-06 06:20:49.979493 | orchestrator | Monday 06 April 2026 06:19:39 +0000 (0:00:04.128) 0:01:25.537 ********** 2026-04-06 06:20:49.979503 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:20:49.979514 | orchestrator | 2026-04-06 06:20:49.979524 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-06 06:20:49.979561 | orchestrator | Monday 06 April 2026 06:19:44 +0000 (0:00:04.204) 0:01:29.742 ********** 2026-04-06 06:20:49.979572 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:20:49.979582 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:20:49.979593 | orchestrator | included: /ansible/roles/glance/tasks/stop_service.yml for testbed-node-0 2026-04-06 06:20:49.979603 | orchestrator | 2026-04-06 06:20:49.979614 | orchestrator | TASK [glance : Stop glance service] ******************************************** 2026-04-06 06:20:49.979624 | orchestrator | Monday 06 April 2026 06:19:46 +0000 (0:00:01.999) 0:01:31.742 ********** 2026-04-06 06:20:49.979633 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:20:49.979644 | orchestrator | 2026-04-06 06:20:49.979654 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-04-06 06:20:49.979664 | orchestrator | Monday 06 April 2026 06:19:59 +0000 (0:00:13.064) 0:01:44.806 ********** 2026-04-06 06:20:49.979675 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:20:49.979686 | 
orchestrator | 2026-04-06 06:20:49.979697 | orchestrator | TASK [glance : Running Glance database expand container] *********************** 2026-04-06 06:20:49.979706 | orchestrator | Monday 06 April 2026 06:20:02 +0000 (0:00:03.336) 0:01:48.143 ********** 2026-04-06 06:20:49.979714 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:20:49.979723 | orchestrator | 2026-04-06 06:20:49.979732 | orchestrator | TASK [glance : Running Glance database migrate container] ********************** 2026-04-06 06:20:49.979746 | orchestrator | Monday 06 April 2026 06:20:28 +0000 (0:00:25.862) 0:02:14.006 ********** 2026-04-06 06:20:49.979755 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:20:49.979763 | orchestrator | 2026-04-06 06:20:49.979772 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-06 06:20:49.979781 | orchestrator | Monday 06 April 2026 06:20:44 +0000 (0:00:16.353) 0:02:30.360 ********** 2026-04-06 06:20:49.979789 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:20:49.979798 | orchestrator | included: /ansible/roles/glance/tasks/config.yml for testbed-node-1, testbed-node-2 2026-04-06 06:20:49.979807 | orchestrator | 2026-04-06 06:20:49.979816 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-04-06 06:20:49.979829 | orchestrator | Monday 06 April 2026 06:20:46 +0000 (0:00:01.473) 0:02:31.833 ********** 2026-04-06 06:20:49.979847 | orchestrator | ok: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 06:21:15.284739 | orchestrator | ok: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 06:21:15.284888 | orchestrator | 2026-04-06 06:21:15.284907 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-06 06:21:15.284921 | orchestrator | Monday 06 April 2026 06:20:51 +0000 (0:00:05.034) 0:02:36.867 ********** 2026-04-06 06:21:15.284932 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-1, testbed-node-2 2026-04-06 06:21:15.284944 | orchestrator | 2026-04-06 06:21:15.284956 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-04-06 06:21:15.284967 | orchestrator | Monday 06 April 2026 06:20:52 +0000 (0:00:01.253) 0:02:38.121 ********** 2026-04-06 06:21:15.284978 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:21:15.284990 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:21:15.285001 | orchestrator | 2026-04-06 06:21:15.285012 | orchestrator | TASK [glance : Copy over 
multiple ceph configs for Glance] ********************* 2026-04-06 06:21:15.285037 | orchestrator | Monday 06 April 2026 06:20:57 +0000 (0:00:04.744) 0:02:42.866 ********** 2026-04-06 06:21:15.285049 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-06 06:21:15.285062 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-06 06:21:15.285073 | orchestrator | 2026-04-06 06:21:15.285083 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-04-06 06:21:15.285094 | orchestrator | Monday 06 April 2026 06:20:59 +0000 (0:00:02.356) 0:02:45.222 ********** 2026-04-06 06:21:15.285105 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-06 06:21:15.285116 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True}) 2026-04-06 06:21:15.285127 | orchestrator | 2026-04-06 06:21:15.285138 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-04-06 06:21:15.285149 | orchestrator | Monday 06 April 2026 06:21:01 +0000 (0:00:02.070) 0:02:47.293 ********** 2026-04-06 06:21:15.285160 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:21:15.285172 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:21:15.285183 | orchestrator | 2026-04-06 06:21:15.285194 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-04-06 06:21:15.285205 | orchestrator | Monday 06 April 2026 06:21:03 +0000 (0:00:01.796) 0:02:49.089 ********** 2026-04-06 06:21:15.285217 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:21:15.285228 | orchestrator | 2026-04-06 
06:21:15.285239 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-04-06 06:21:15.285251 | orchestrator | Monday 06 April 2026 06:21:04 +0000 (0:00:01.217) 0:02:50.307 ********** 2026-04-06 06:21:15.285264 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:21:15.285276 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:21:15.285288 | orchestrator | 2026-04-06 06:21:15.285302 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-06 06:21:15.285315 | orchestrator | Monday 06 April 2026 06:21:05 +0000 (0:00:01.237) 0:02:51.545 ********** 2026-04-06 06:21:15.285328 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-1, testbed-node-2 2026-04-06 06:21:15.285341 | orchestrator | 2026-04-06 06:21:15.285371 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-04-06 06:21:15.285385 | orchestrator | Monday 06 April 2026 06:21:07 +0000 (0:00:01.216) 0:02:52.762 ********** 2026-04-06 06:21:15.285400 | orchestrator | ok: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 06:21:15.285431 | orchestrator | ok: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 06:21:15.285446 | orchestrator | 2026-04-06 06:21:15.285459 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-06 06:21:15.285473 | orchestrator | Monday 06 April 2026 06:21:12 +0000 (0:00:04.947) 0:02:57.709 ********** 2026-04-06 06:21:15.285498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-06 06:21:29.426271 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:21:29.426428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 
2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-06 06:21:29.426451 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:21:29.426463 | orchestrator | 2026-04-06 06:21:29.426475 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-06 06:21:29.426488 | orchestrator | Monday 06 April 2026 06:21:16 +0000 (0:00:04.518) 0:03:02.228 ********** 2026-04-06 06:21:29.426500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-06 06:21:29.426535 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:21:29.426605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-06 06:21:29.426622 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:21:29.426633 | orchestrator | 2026-04-06 06:21:29.426645 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-06 06:21:29.426656 | orchestrator | Monday 06 April 2026 06:21:20 +0000 (0:00:04.103) 0:03:06.332 ********** 2026-04-06 06:21:29.426667 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:21:29.426678 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:21:29.426689 | orchestrator | 2026-04-06 06:21:29.426700 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-04-06 06:21:29.426710 | orchestrator | Monday 06 April 2026 06:21:25 +0000 (0:00:04.550) 0:03:10.883 ********** 2026-04-06 06:21:29.426764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 06:21:29.426806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 06:22:14.185003 | orchestrator | 2026-04-06 06:22:14.185117 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-04-06 06:22:14.185134 | orchestrator | Monday 06 April 2026 06:21:30 +0000 (0:00:05.299) 0:03:16.182 ********** 2026-04-06 06:22:14.185147 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:22:14.185165 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:22:14.185184 | orchestrator | 2026-04-06 06:22:14.185202 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-04-06 06:22:14.185221 | orchestrator | Monday 06 April 2026 06:21:36 +0000 (0:00:06.122) 0:03:22.305 ********** 2026-04-06 06:22:14.185239 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:22:14.185260 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:22:14.185279 | orchestrator | 2026-04-06 06:22:14.185356 | orchestrator | TASK [glance : Copying over glance-image-import.conf] 
************************** 2026-04-06 06:22:14.185371 | orchestrator | Monday 06 April 2026 06:21:40 +0000 (0:00:03.899) 0:03:26.204 ********** 2026-04-06 06:22:14.185382 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:22:14.185393 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:22:14.185404 | orchestrator | 2026-04-06 06:22:14.185415 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-04-06 06:22:14.185426 | orchestrator | Monday 06 April 2026 06:21:44 +0000 (0:00:03.838) 0:03:30.043 ********** 2026-04-06 06:22:14.185437 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:22:14.185448 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:22:14.185459 | orchestrator | 2026-04-06 06:22:14.185470 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-04-06 06:22:14.185483 | orchestrator | Monday 06 April 2026 06:21:48 +0000 (0:00:04.374) 0:03:34.417 ********** 2026-04-06 06:22:14.185496 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:22:14.185509 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:22:14.185521 | orchestrator | 2026-04-06 06:22:14.185533 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-04-06 06:22:14.185546 | orchestrator | Monday 06 April 2026 06:21:50 +0000 (0:00:01.242) 0:03:35.659 ********** 2026-04-06 06:22:14.185559 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-06 06:22:14.185649 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:22:14.185669 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-06 06:22:14.185690 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:22:14.185709 | orchestrator | 2026-04-06 06:22:14.185729 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] 
*********************** 2026-04-06 06:22:14.185748 | orchestrator | Monday 06 April 2026 06:21:54 +0000 (0:00:04.561) 0:03:40.221 ********** 2026-04-06 06:22:14.185769 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:22:14.185788 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:22:14.185809 | orchestrator | 2026-04-06 06:22:14.185826 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************ 2026-04-06 06:22:14.185844 | orchestrator | Monday 06 April 2026 06:21:59 +0000 (0:00:04.604) 0:03:44.825 ********** 2026-04-06 06:22:14.185862 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:22:14.185880 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:22:14.185898 | orchestrator | 2026-04-06 06:22:14.185915 | orchestrator | TASK [service-check-containers : glance | Check containers] ******************** 2026-04-06 06:22:14.185933 | orchestrator | Monday 06 April 2026 06:22:04 +0000 (0:00:04.826) 0:03:49.651 ********** 2026-04-06 06:22:14.185977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 06:22:14.186120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 06:22:14.186141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-06 06:22:14.186154 | orchestrator | 2026-04-06 06:22:14.186174 | orchestrator | TASK [service-check-containers : glance | Notify handlers to restart containers] *** 2026-04-06 06:22:14.186224 | orchestrator | Monday 06 April 2026 06:22:09 +0000 (0:00:05.451) 0:03:55.103 ********** 2026-04-06 06:22:14.186237 | orchestrator | changed: [testbed-node-0] => { 2026-04-06 06:22:14.186255 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:22:14.186267 | orchestrator | } 2026-04-06 06:22:14.186278 | orchestrator | changed: [testbed-node-1] => { 2026-04-06 06:22:14.186289 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:22:14.186300 | orchestrator | } 2026-04-06 06:22:14.186310 | orchestrator | changed: [testbed-node-2] => { 2026-04-06 06:22:14.186321 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:22:14.186331 | orchestrator | } 2026-04-06 06:22:14.186342 | orchestrator | 2026-04-06 06:22:14.186353 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-06 06:22:14.186364 | orchestrator | Monday 06 April 2026 06:22:11 +0000 (0:00:01.533) 0:03:56.636 ********** 2026-04-06 06:22:14.186386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-06 06:23:18.723732 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:23:18.723859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-06 06:23:18.723904 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:23:18.723918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-06 06:23:18.723930 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:23:18.723941 | orchestrator | 2026-04-06 06:23:18.723952 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-06 06:23:18.723964 | orchestrator | Monday 06 April 2026 06:22:15 +0000 (0:00:04.564) 0:04:01.201 ********** 2026-04-06 06:23:18.723975 | orchestrator | 2026-04-06 06:23:18.723986 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-06 06:23:18.723997 | orchestrator | Monday 06 April 2026 06:22:16 +0000 (0:00:00.444) 0:04:01.645 ********** 2026-04-06 06:23:18.724007 | orchestrator | 2026-04-06 06:23:18.724018 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-06 06:23:18.724046 | orchestrator | Monday 06 April 2026 06:22:16 +0000 (0:00:00.467) 
0:04:02.113 ********** 2026-04-06 06:23:18.724057 | orchestrator | 2026-04-06 06:23:18.724068 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-04-06 06:23:18.724078 | orchestrator | Monday 06 April 2026 06:22:17 +0000 (0:00:00.817) 0:04:02.931 ********** 2026-04-06 06:23:18.724089 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:23:18.724100 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:23:18.724111 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:23:18.724121 | orchestrator | 2026-04-06 06:23:18.724132 | orchestrator | TASK [glance : Running Glance database contract container] ********************* 2026-04-06 06:23:18.724143 | orchestrator | Monday 06 April 2026 06:22:56 +0000 (0:00:39.217) 0:04:42.149 ********** 2026-04-06 06:23:18.724153 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:23:18.724164 | orchestrator | 2026-04-06 06:23:18.724175 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-04-06 06:23:18.724195 | orchestrator | Monday 06 April 2026 06:23:11 +0000 (0:00:15.298) 0:04:57.448 ********** 2026-04-06 06:23:18.724208 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:23:18.724222 | orchestrator | 2026-04-06 06:23:18.724234 | orchestrator | TASK [glance : Finish Glance upgrade] ****************************************** 2026-04-06 06:23:18.724247 | orchestrator | Monday 06 April 2026 06:23:14 +0000 (0:00:03.043) 0:05:00.491 ********** 2026-04-06 06:23:18.724260 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:23:18.724274 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:23:18.724286 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:23:18.724298 | orchestrator | 2026-04-06 06:23:18.724310 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-06 06:23:18.724323 | orchestrator | Monday 06 April 2026 06:23:16 +0000 (0:00:01.440) 0:05:01.932 ********** 
2026-04-06 06:23:18.724336 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:23:18.724349 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:23:18.724362 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:23:18.724374 | orchestrator |
2026-04-06 06:23:18.724386 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 06:23:18.724400 | orchestrator | testbed-node-0 : ok=27  changed=11  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-04-06 06:23:18.724415 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-06 06:23:18.724427 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-04-06 06:23:18.724437 | orchestrator |
2026-04-06 06:23:18.724448 | orchestrator |
2026-04-06 06:23:18.724459 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 06:23:18.724470 | orchestrator | Monday 06 April 2026 06:23:18 +0000 (0:00:01.921) 0:05:03.853 **********
2026-04-06 06:23:18.724481 | orchestrator | ===============================================================================
2026-04-06 06:23:18.724491 | orchestrator | glance : Restart glance-api container ---------------------------------- 39.22s
2026-04-06 06:23:18.724502 | orchestrator | glance : Running Glance database expand container ---------------------- 25.86s
2026-04-06 06:23:18.724513 | orchestrator | glance : Running Glance database migrate container --------------------- 16.35s
2026-04-06 06:23:18.724523 | orchestrator | glance : Running Glance database contract container -------------------- 15.30s
2026-04-06 06:23:18.724534 | orchestrator | glance : Stop glance service ------------------------------------------- 13.06s
2026-04-06 06:23:18.724545 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.51s
2026-04-06 06:23:18.724555 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.12s
2026-04-06 06:23:18.724566 | orchestrator | service-check-containers : glance | Check containers -------------------- 5.45s
2026-04-06 06:23:18.724577 | orchestrator | glance : Copying over config.json files for services -------------------- 5.30s
2026-04-06 06:23:18.724625 | orchestrator | glance : Copying over config.json files for services -------------------- 5.13s
2026-04-06 06:23:18.724638 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.03s
2026-04-06 06:23:18.724650 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.95s
2026-04-06 06:23:18.724660 | orchestrator | glance : Generating 'hostid' file for glance_api ------------------------ 4.83s
2026-04-06 06:23:18.724671 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.78s
2026-04-06 06:23:18.724682 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.74s
2026-04-06 06:23:18.724693 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.74s
2026-04-06 06:23:18.724704 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.60s
2026-04-06 06:23:18.724715 | orchestrator | service-check-containers : Include tasks -------------------------------- 4.56s
2026-04-06 06:23:18.724733 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.56s
2026-04-06 06:23:18.724744 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.55s
2026-04-06 06:23:18.924968 | orchestrator | + osism apply -a upgrade cinder
2026-04-06 06:23:20.207078 | orchestrator | 2026-04-06 06:23:20 | INFO  | Prepare task for execution of cinder.
2026-04-06 06:23:20.272961 | orchestrator | 2026-04-06 06:23:20 | INFO  | Task 1e66036d-a097-4669-aa32-adbddd83106c (cinder) was prepared for execution. 2026-04-06 06:23:20.273069 | orchestrator | 2026-04-06 06:23:20 | INFO  | It takes a moment until task 1e66036d-a097-4669-aa32-adbddd83106c (cinder) has been started and output is visible here. 2026-04-06 06:23:42.138650 | orchestrator | 2026-04-06 06:23:42.138766 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 06:23:42.138786 | orchestrator | 2026-04-06 06:23:42.138807 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 06:23:42.138827 | orchestrator | Monday 06 April 2026 06:23:24 +0000 (0:00:01.411) 0:00:01.411 ********** 2026-04-06 06:23:42.138844 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:23:42.138864 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:23:42.138884 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:23:42.138902 | orchestrator | 2026-04-06 06:23:42.138921 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 06:23:42.138934 | orchestrator | Monday 06 April 2026 06:23:26 +0000 (0:00:01.959) 0:00:03.371 ********** 2026-04-06 06:23:42.138946 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-04-06 06:23:42.138957 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-04-06 06:23:42.138968 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-04-06 06:23:42.138979 | orchestrator | 2026-04-06 06:23:42.138990 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-04-06 06:23:42.139001 | orchestrator | 2026-04-06 06:23:42.139012 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-06 06:23:42.139022 | orchestrator | Monday 06 April 2026 06:23:29 +0000 (0:00:02.315) 
0:00:05.686 ********** 2026-04-06 06:23:42.139034 | orchestrator | included: /ansible/roles/cinder/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 06:23:42.139046 | orchestrator | 2026-04-06 06:23:42.139056 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-06 06:23:42.139067 | orchestrator | Monday 06 April 2026 06:23:31 +0000 (0:00:02.103) 0:00:07.789 ********** 2026-04-06 06:23:42.139078 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:23:42.139090 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:23:42.139100 | orchestrator | included: /ansible/roles/cinder/tasks/config.yml for testbed-node-0 2026-04-06 06:23:42.139111 | orchestrator | 2026-04-06 06:23:42.139122 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-06 06:23:42.139133 | orchestrator | Monday 06 April 2026 06:23:33 +0000 (0:00:01.874) 0:00:09.664 ********** 2026-04-06 06:23:42.139150 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:23:42.139233 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:23:42.139249 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 06:23:42.139280 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 06:23:42.139293 | orchestrator | 2026-04-06 06:23:42.139304 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-06 06:23:42.139315 | orchestrator | Monday 06 April 2026 06:23:36 +0000 (0:00:03.346) 0:00:13.010 ********** 2026-04-06 06:23:42.139326 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:23:42.139337 | orchestrator | 2026-04-06 06:23:42.139347 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-06 06:23:42.139359 | orchestrator | Monday 06 April 2026 06:23:37 +0000 (0:00:01.120) 0:00:14.131 ********** 2026-04-06 06:23:42.139370 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0 2026-04-06 06:23:42.139381 | orchestrator | 2026-04-06 06:23:42.139391 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-04-06 06:23:42.139402 | orchestrator | Monday 06 April 2026 06:23:39 +0000 (0:00:01.468) 0:00:15.599 ********** 2026-04-06 06:23:42.139413 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-06 06:23:42.139423 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-06 06:23:42.139434 | orchestrator | 2026-04-06 06:23:42.139445 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-06 06:23:42.139455 | orchestrator | Monday 06 April 2026 06:23:41 +0000 (0:00:02.595) 0:00:18.195 ********** 2026-04-06 06:23:42.139468 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-06 06:23:42.139490 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-06 06:23:42.139512 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-06 06:24:02.352864 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-06 06:24:02.352992 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-06 06:24:02.353046 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-06 06:24:02.353065 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-06 06:24:02.353097 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-06 06:24:02.353107 | orchestrator | 2026-04-06 06:24:02.353117 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-06 06:24:02.353126 | orchestrator | Monday 06 April 2026 06:23:47 +0000 (0:00:06.097) 0:00:24.293 ********** 2026-04-06 06:24:02.353134 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-06 06:24:02.353144 | orchestrator | 2026-04-06 06:24:02.353158 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-06 06:24:02.353171 | 
orchestrator | Monday 06 April 2026 06:23:50 +0000 (0:00:02.339) 0:00:26.633 ********** 2026-04-06 06:24:02.353185 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-06 06:24:02.353200 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-04-06 06:24:02.353215 | orchestrator | 2026-04-06 06:24:02.353229 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-06 06:24:02.353253 | orchestrator | Monday 06 April 2026 06:23:53 +0000 (0:00:03.480) 0:00:30.113 ********** 2026-04-06 06:24:02.353267 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-06 06:24:02.353281 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-06 06:24:02.353296 | orchestrator | 2026-04-06 06:24:02.353311 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-06 06:24:02.353326 | orchestrator | Monday 06 April 2026 06:23:55 +0000 (0:00:02.028) 0:00:32.141 ********** 2026-04-06 06:24:02.353339 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:24:02.353354 | orchestrator | 2026-04-06 06:24:02.353368 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-06 06:24:02.353381 | orchestrator | Monday 06 April 2026 06:23:56 +0000 (0:00:01.122) 0:00:33.264 ********** 2026-04-06 06:24:02.353395 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:24:02.353410 | orchestrator | 2026-04-06 06:24:02.353425 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-06 06:24:02.353440 | orchestrator | Monday 06 April 2026 06:23:57 +0000 (0:00:01.138) 0:00:34.403 ********** 2026-04-06 06:24:02.353455 | orchestrator | included: 
/ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0 2026-04-06 06:24:02.353469 | orchestrator | 2026-04-06 06:24:02.353484 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-06 06:24:02.353497 | orchestrator | Monday 06 April 2026 06:23:59 +0000 (0:00:01.456) 0:00:35.860 ********** 2026-04-06 06:24:02.353514 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:24:02.353532 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:24:02.353558 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 06:24:09.177469 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 06:24:09.177580 | orchestrator | 2026-04-06 06:24:09.177597 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-06 06:24:09.177654 | orchestrator | Monday 06 April 2026 06:24:04 +0000 (0:00:04.834) 0:00:40.694 ********** 2026-04-06 06:24:09.177672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:24:09.177688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:24:09.177702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 06:24:09.177715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 06:24:09.177752 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:24:09.177765 | orchestrator | 2026-04-06 06:24:09.177793 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-06 06:24:09.177805 | orchestrator | Monday 06 April 2026 06:24:06 +0000 (0:00:01.807) 0:00:42.502 ********** 2026-04-06 06:24:09.177818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:24:09.177831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:24:09.177843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 06:24:09.177854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 06:24:09.177866 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:24:09.177877 | orchestrator | 2026-04-06 06:24:09.177888 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-04-06 06:24:09.177899 | orchestrator | Monday 06 April 2026 06:24:07 +0000 (0:00:01.683) 0:00:44.186 ********** 2026-04-06 06:24:09.177918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:24:36.863965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:24:36.864064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 06:24:36.864076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': 
True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 06:24:36.864087 | orchestrator | 2026-04-06 06:24:36.864096 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-06 06:24:36.864105 | orchestrator | Monday 06 April 2026 06:24:13 +0000 (0:00:05.314) 0:00:49.500 ********** 2026-04-06 06:24:36.864113 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-06 06:24:36.864121 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:24:36.864129 | orchestrator | 2026-04-06 06:24:36.864137 | orchestrator | TASK [Configure uWSGI for Cinder] ********************************************** 2026-04-06 06:24:36.864144 | orchestrator | Monday 06 April 2026 06:24:14 +0000 (0:00:01.516) 0:00:51.017 ********** 2026-04-06 06:24:36.864152 | orchestrator | included: service-uwsgi-config for testbed-node-0 2026-04-06 06:24:36.864159 | orchestrator | 2026-04-06 06:24:36.864167 | orchestrator | TASK [service-uwsgi-config : Copying over cinder-api uWSGI config] ************* 2026-04-06 06:24:36.864196 | orchestrator | Monday 06 April 2026 06:24:16 +0000 (0:00:01.900) 0:00:52.917 ********** 2026-04-06 06:24:36.864204 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:24:36.864211 | orchestrator | 2026-04-06 06:24:36.864218 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-06 06:24:36.864225 | orchestrator | Monday 06 April 2026 06:24:18 +0000 (0:00:02.527) 0:00:55.445 ********** 2026-04-06 06:24:36.864234 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:24:36.864257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:24:36.864266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 06:24:36.864274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 06:24:36.864282 | orchestrator | 2026-04-06 06:24:36.864289 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-06 06:24:36.864296 | orchestrator | Monday 06 April 2026 06:24:31 +0000 (0:00:12.263) 0:01:07.709 ********** 2026-04-06 06:24:36.864304 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:24:36.864311 | orchestrator | 2026-04-06 06:24:36.864318 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-04-06 06:24:36.864331 | orchestrator | Monday 06 April 2026 06:24:33 +0000 (0:00:02.336) 0:01:10.046 ********** 2026-04-06 06:24:36.864338 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:24:36.864346 | orchestrator | 
2026-04-06 06:24:36.864353 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-06 06:24:36.864360 | orchestrator | Monday 06 April 2026 06:24:36 +0000 (0:00:02.683) 0:01:12.729 ********** 2026-04-06 06:24:36.864368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:24:36.864382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  
2026-04-06 06:25:18.460441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 06:25:18.460529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 06:25:18.460540 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:25:18.460548 | orchestrator | 2026-04-06 06:25:18.460555 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-06 06:25:18.460563 | orchestrator | Monday 06 April 2026 06:24:37 +0000 (0:00:01.683) 0:01:14.413 ********** 2026-04-06 06:25:18.460569 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:25:18.460593 | orchestrator | 
2026-04-06 06:25:18.460600 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-04-06 06:25:18.460607 | orchestrator | Monday 06 April 2026 06:24:39 +0000 (0:00:01.510) 0:01:15.924 ********** 2026-04-06 06:25:18.460613 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:25:18.460619 | orchestrator | 2026-04-06 06:25:18.460625 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-06 06:25:18.460666 | orchestrator | Monday 06 April 2026 06:25:16 +0000 (0:00:37.233) 0:01:53.157 ********** 2026-04-06 06:25:18.460676 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:25:18.460685 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:25:18.460706 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:25:18.460714 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:25:18.460727 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 06:25:18.460734 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:25:18.460741 | orchestrator 
| ok: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:25:18.460753 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 06:25:26.231093 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 06:25:26.231221 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 06:25:26.231284 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 06:25:26.231307 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 06:25:26.231329 | orchestrator | 2026-04-06 06:25:26.231351 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-06 06:25:26.231366 | orchestrator | Monday 06 April 2026 06:25:20 +0000 (0:00:03.385) 0:01:56.543 ********** 2026-04-06 06:25:26.231377 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:25:26.231389 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:25:26.231399 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:25:26.231410 | orchestrator | 2026-04-06 06:25:26.231421 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-06 06:25:26.231432 | orchestrator | Monday 06 April 2026 06:25:21 +0000 (0:00:01.390) 0:01:57.933 ********** 2026-04-06 06:25:26.231444 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 06:25:26.231455 | orchestrator | 2026-04-06 06:25:26.231465 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-04-06 06:25:26.231476 | orchestrator | Monday 06 April 2026 06:25:22 +0000 (0:00:01.523) 0:01:59.457 ********** 2026-04-06 06:25:26.231488 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-06 06:25:26.231498 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-06 06:25:26.231509 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-06 06:25:26.231520 | orchestrator | ok: [testbed-node-0] => 
(item=cinder-backup) 2026-04-06 06:25:26.231530 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-04-06 06:25:26.231541 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-04-06 06:25:26.231552 | orchestrator | 2026-04-06 06:25:26.231563 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-06 06:25:26.231592 | orchestrator | Monday 06 April 2026 06:25:25 +0000 (0:00:02.738) 0:02:02.196 ********** 2026-04-06 06:25:26.231608 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-06 06:25:26.231667 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-06 06:25:26.231691 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-06 06:25:26.231706 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-06 06:25:26.231731 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-06 06:25:27.566853 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-06 06:25:27.566959 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-06 06:25:27.566976 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-06 06:25:27.566989 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-06 06:25:27.567048 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-06 06:25:27.567063 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-06 06:25:27.567075 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-06 06:25:27.567087 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-06 06:25:27.567114 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-06 06:25:30.906368 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-06 06:25:30.906471 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-06 06:25:30.906488 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-06 06:25:30.906501 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 
'enabled': True}]) 2026-04-06 06:25:30.906559 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-06 06:25:30.906575 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-06 06:25:30.906586 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-06 06:25:30.906598 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-06 06:25:30.906618 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-06 06:25:30.906692 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-06 06:25:47.462804 | orchestrator | 2026-04-06 06:25:47.462921 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-06 06:25:47.462939 | orchestrator | Monday 06 April 2026 06:25:32 +0000 (0:00:06.305) 0:02:08.502 ********** 2026-04-06 06:25:47.462952 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-06 06:25:47.462965 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-06 06:25:47.462976 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 
'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-06 06:25:47.462987 | orchestrator | 2026-04-06 06:25:47.462998 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-06 06:25:47.463009 | orchestrator | Monday 06 April 2026 06:25:34 +0000 (0:00:02.700) 0:02:11.202 ********** 2026-04-06 06:25:47.463020 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-06 06:25:47.463031 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-06 06:25:47.463042 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-06 06:25:47.463054 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-04-06 06:25:47.463067 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-04-06 06:25:47.463078 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-04-06 06:25:47.463088 | orchestrator | 2026-04-06 06:25:47.463099 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-06 06:25:47.463137 | orchestrator | Monday 06 April 2026 06:25:38 +0000 (0:00:03.602) 0:02:14.804 ********** 2026-04-06 06:25:47.463148 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-06 06:25:47.463159 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-06 06:25:47.463170 
| orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-06 06:25:47.463182 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-06 06:25:47.463193 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-04-06 06:25:47.463203 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-04-06 06:25:47.463214 | orchestrator | 2026-04-06 06:25:47.463225 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-06 06:25:47.463236 | orchestrator | Monday 06 April 2026 06:25:40 +0000 (0:00:02.097) 0:02:16.902 ********** 2026-04-06 06:25:47.463247 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:25:47.463259 | orchestrator | 2026-04-06 06:25:47.463269 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-06 06:25:47.463280 | orchestrator | Monday 06 April 2026 06:25:41 +0000 (0:00:01.155) 0:02:18.057 ********** 2026-04-06 06:25:47.463290 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:25:47.463301 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:25:47.463312 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:25:47.463322 | orchestrator | 2026-04-06 06:25:47.463333 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-06 06:25:47.463347 | orchestrator | Monday 06 April 2026 06:25:43 +0000 (0:00:01.573) 0:02:19.631 ********** 2026-04-06 06:25:47.463360 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 06:25:47.463372 | orchestrator | 2026-04-06 06:25:47.463384 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-06 06:25:47.463396 | orchestrator | Monday 06 April 2026 06:25:44 +0000 (0:00:01.299) 0:02:20.931 ********** 2026-04-06 06:25:47.463432 | orchestrator | ok: [testbed-node-0] => (item={'key': 
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:25:47.463453 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:25:47.463474 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:25:47.463488 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:25:47.463500 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:25:47.463512 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:25:47.463532 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 06:25:50.421119 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-volume', 
'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 06:25:50.421245 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 06:25:50.421260 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 06:25:50.421271 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 06:25:50.421282 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 06:25:50.421293 | orchestrator | 2026-04-06 06:25:50.421305 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-06 06:25:50.421316 | orchestrator | Monday 06 April 2026 06:25:49 +0000 (0:00:05.235) 0:02:26.167 ********** 2026-04-06 06:25:50.421347 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:25:50.421369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:25:50.421381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 06:25:50.421393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 06:25:50.421403 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:25:50.421415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:25:50.421439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:25:52.079820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 06:25:52.079987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 06:25:52.080015 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:25:52.080042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:25:52.080063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:25:52.080076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 06:25:52.080135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 
06:25:52.080147 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:25:52.080157 | orchestrator | 2026-04-06 06:25:52.080168 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-06 06:25:52.080180 | orchestrator | Monday 06 April 2026 06:25:51 +0000 (0:00:01.904) 0:02:28.071 ********** 2026-04-06 06:25:52.080197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:25:52.080216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:25:52.080234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 06:25:52.080263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 06:25:52.080298 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:25:52.080329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:25:54.989857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:25:54.989942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 06:25:54.989953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 06:25:54.989981 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:25:54.989992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:25:54.990001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:25:54.990062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 06:25:54.990071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 06:25:54.990077 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:25:54.990084 | orchestrator | 2026-04-06 06:25:54.990091 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-04-06 06:25:54.990098 | orchestrator | Monday 06 April 2026 06:25:53 +0000 (0:00:01.688) 0:02:29.759 ********** 2026-04-06 06:25:54.990105 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:25:54.990117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:25:54.990129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:26:08.540389 | 
orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:08.540474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:08.540500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:08.540508 | orchestrator | 
ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:08.540515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:08.540533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:08.540540 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:08.540546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:08.540557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:08.540563 | orchestrator | 2026-04-06 06:26:08.540570 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-06 06:26:08.540577 | orchestrator | Monday 06 April 2026 06:25:58 +0000 (0:00:05.435) 0:02:35.195 ********** 2026-04-06 06:26:08.540583 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-06 06:26:08.540589 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:26:08.540597 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-06 06:26:08.540602 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:26:08.540608 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-06 06:26:08.540613 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:26:08.540618 | orchestrator | 2026-04-06 06:26:08.540624 | orchestrator | TASK [Configure uWSGI for Cinder] ********************************************** 2026-04-06 06:26:08.540630 | orchestrator | Monday 06 April 2026 06:26:00 +0000 (0:00:01.740) 0:02:36.935 ********** 2026-04-06 06:26:08.540635 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 06:26:08.540641 | orchestrator | 2026-04-06 06:26:08.540707 | orchestrator | TASK [service-uwsgi-config : Copying over cinder-api uWSGI 
config] ************* 2026-04-06 06:26:08.540714 | orchestrator | Monday 06 April 2026 06:26:02 +0000 (0:00:01.720) 0:02:38.655 ********** 2026-04-06 06:26:08.540719 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:26:08.540726 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:26:08.540731 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:26:08.540736 | orchestrator | 2026-04-06 06:26:08.540742 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-06 06:26:08.540747 | orchestrator | Monday 06 April 2026 06:26:05 +0000 (0:00:03.043) 0:02:41.699 ********** 2026-04-06 06:26:08.540759 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:26:16.802749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:26:16.802863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:26:16.802882 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:16.802896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:16.802908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:16.802964 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:16.802979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:16.802992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:16.803005 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:16.803017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:16.803035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:24.137224 | orchestrator | 2026-04-06 06:26:24.137337 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-06 06:26:24.137354 | orchestrator | Monday 06 April 2026 06:26:17 +0000 (0:00:12.657) 0:02:54.357 ********** 2026-04-06 06:26:24.137366 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:26:24.137379 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:26:24.137390 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:26:24.137401 | orchestrator | 2026-04-06 06:26:24.137412 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-04-06 06:26:24.137423 | orchestrator | Monday 06 April 2026 06:26:20 +0000 (0:00:02.824) 0:02:57.181 ********** 2026-04-06 06:26:24.137439 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:26:24.137458 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:26:24.137478 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:26:24.137496 | orchestrator | 2026-04-06 06:26:24.137514 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-06 06:26:24.137533 | orchestrator | Monday 06 April 2026 06:26:23 +0000 (0:00:02.773) 0:02:59.955 ********** 2026-04-06 06:26:24.137559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:26:24.137586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:26:24.137608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 06:26:24.137707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 06:26:24.137732 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:26:24.137803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:26:24.137832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:26:24.137855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 06:26:24.137876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 06:26:24.137908 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:26:24.137923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:26:24.137951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:26:30.241381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 06:26:30.241490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 06:26:30.241516 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:26:30.241539 | orchestrator | 2026-04-06 06:26:30.241559 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-06 06:26:30.241573 | 
orchestrator | Monday 06 April 2026 06:26:25 +0000 (0:00:01.793) 0:03:01.749 ********** 2026-04-06 06:26:30.241584 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:26:30.241595 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:26:30.241606 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:26:30.241617 | orchestrator | 2026-04-06 06:26:30.241628 | orchestrator | TASK [service-check-containers : cinder | Check containers] ******************** 2026-04-06 06:26:30.241639 | orchestrator | Monday 06 April 2026 06:26:26 +0000 (0:00:01.695) 0:03:03.445 ********** 2026-04-06 06:26:30.241729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:26:30.241762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:26:30.241797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:26:30.241812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:30.241825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:30.241844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:30.241856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:30.241881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:34.123270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:34.123365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:34.123404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:34.123415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 06:26:34.123427 | orchestrator | 2026-04-06 06:26:34.123439 | orchestrator | TASK [service-check-containers : cinder | Notify handlers to restart containers] *** 2026-04-06 06:26:34.123450 | orchestrator | Monday 06 April 2026 06:26:32 +0000 (0:00:05.202) 0:03:08.648 ********** 2026-04-06 06:26:34.123462 | orchestrator | changed: [testbed-node-0] => { 2026-04-06 06:26:34.123473 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:26:34.123483 | orchestrator | } 2026-04-06 06:26:34.123519 | orchestrator | changed: [testbed-node-1] => { 2026-04-06 06:26:34.123529 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:26:34.123539 | orchestrator | } 2026-04-06 06:26:34.123549 | orchestrator | changed: [testbed-node-2] => { 2026-04-06 06:26:34.123559 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:26:34.123568 | orchestrator | } 2026-04-06 06:26:34.123578 | orchestrator | 2026-04-06 06:26:34.123588 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-06 06:26:34.123598 | orchestrator | Monday 06 April 2026 06:26:33 +0000 (0:00:01.433) 0:03:10.082 ********** 2026-04-06 06:26:34.123642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:26:34.123746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:26:34.123770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 06:26:34.123783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 06:26:34.123795 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:26:34.123813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:26:34.123835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:28:56.912777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 06:28:56.912920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 06:28:56.912939 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:28:56.912958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:28:56.912973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:28:56.913001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-06 06:28:56.913033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-06 06:28:56.913053 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:28:56.913065 | orchestrator | 2026-04-06 06:28:56.913078 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-06 06:28:56.913090 | orchestrator | Monday 06 April 2026 06:26:35 +0000 (0:00:01.791) 0:03:11.873 ********** 
2026-04-06 06:28:56.913101 | orchestrator | 2026-04-06 06:28:56.913111 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-06 06:28:56.913122 | orchestrator | Monday 06 April 2026 06:26:35 +0000 (0:00:00.479) 0:03:12.352 ********** 2026-04-06 06:28:56.913133 | orchestrator | 2026-04-06 06:28:56.913144 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-06 06:28:56.913155 | orchestrator | Monday 06 April 2026 06:26:36 +0000 (0:00:00.651) 0:03:13.004 ********** 2026-04-06 06:28:56.913165 | orchestrator | 2026-04-06 06:28:56.913176 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-04-06 06:28:56.913186 | orchestrator | Monday 06 April 2026 06:26:37 +0000 (0:00:00.818) 0:03:13.822 ********** 2026-04-06 06:28:56.913197 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:28:56.913208 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:28:56.913219 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:28:56.913229 | orchestrator | 2026-04-06 06:28:56.913240 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-04-06 06:28:56.913251 | orchestrator | Monday 06 April 2026 06:27:11 +0000 (0:00:34.594) 0:03:48.416 ********** 2026-04-06 06:28:56.913261 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:28:56.913272 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:28:56.913283 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:28:56.913293 | orchestrator | 2026-04-06 06:28:56.913305 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-04-06 06:28:56.913316 | orchestrator | Monday 06 April 2026 06:27:25 +0000 (0:00:13.446) 0:04:01.863 ********** 2026-04-06 06:28:56.913327 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:28:56.913337 | orchestrator | changed: [testbed-node-1] 
2026-04-06 06:28:56.913348 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:28:56.913359 | orchestrator | 2026-04-06 06:28:56.913369 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-04-06 06:28:56.913380 | orchestrator | Monday 06 April 2026 06:28:05 +0000 (0:00:39.959) 0:04:41.822 ********** 2026-04-06 06:28:56.913391 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:28:56.913401 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:28:56.913412 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:28:56.913423 | orchestrator | 2026-04-06 06:28:56.913433 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-04-06 06:28:56.913445 | orchestrator | Monday 06 April 2026 06:28:19 +0000 (0:00:13.930) 0:04:55.753 ********** 2026-04-06 06:28:56.913456 | orchestrator | Pausing for 30 seconds 2026-04-06 06:28:56.913467 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:28:56.913478 | orchestrator | 2026-04-06 06:28:56.913489 | orchestrator | TASK [cinder : Reload cinder services to remove RPC version pin] *************** 2026-04-06 06:28:56.913500 | orchestrator | Monday 06 April 2026 06:28:50 +0000 (0:00:31.530) 0:05:27.283 ********** 2026-04-06 06:28:56.913517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:28:56.913545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:29:34.872138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:29:34.872285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:29:34.872317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:29:34.872358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:29:34.872410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 06:29:34.872455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 06:29:34.872476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-06 06:29:34.872497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 06:29:34.872518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 06:29:34.872556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-06 06:29:34.872577 | orchestrator | 2026-04-06 06:29:34.872599 | orchestrator | TASK [cinder : Running Cinder online schema migration] ************************* 2026-04-06 06:29:34.872620 | orchestrator | Monday 06 April 2026 06:29:20 +0000 (0:00:29.293) 0:05:56.576 ********** 2026-04-06 06:29:34.872639 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:29:34.872659 | orchestrator | 2026-04-06 06:29:34.872680 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 06:29:34.872702 | orchestrator | testbed-node-0 : ok=44  changed=13  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-06 06:29:34.872757 | orchestrator | testbed-node-1 : ok=25  changed=11  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-06 06:29:34.872777 | orchestrator | testbed-node-2 : ok=25  changed=11  
unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-06 06:29:34.872797 | orchestrator | 2026-04-06 06:29:34.872816 | orchestrator | 2026-04-06 06:29:34.872836 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 06:29:34.872866 | orchestrator | Monday 06 April 2026 06:29:34 +0000 (0:00:14.744) 0:06:11.321 ********** 2026-04-06 06:29:35.301890 | orchestrator | =============================================================================== 2026-04-06 06:29:35.301985 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 39.96s 2026-04-06 06:29:35.301999 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 37.23s 2026-04-06 06:29:35.302011 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 34.59s 2026-04-06 06:29:35.302082 | orchestrator | cinder : Wait for cinder services to update service versions ----------- 31.53s 2026-04-06 06:29:35.302094 | orchestrator | cinder : Reload cinder services to remove RPC version pin -------------- 29.29s 2026-04-06 06:29:35.302106 | orchestrator | cinder : Running Cinder online schema migration ------------------------ 14.74s 2026-04-06 06:29:35.302117 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 13.93s 2026-04-06 06:29:35.302128 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 13.45s 2026-04-06 06:29:35.302138 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.66s 2026-04-06 06:29:35.302149 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.26s 2026-04-06 06:29:35.302161 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 6.31s 2026-04-06 06:29:35.302171 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 6.10s 
2026-04-06 06:29:35.302182 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.43s 2026-04-06 06:29:35.302193 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.31s 2026-04-06 06:29:35.302204 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.24s 2026-04-06 06:29:35.302215 | orchestrator | service-check-containers : cinder | Check containers -------------------- 5.20s 2026-04-06 06:29:35.302252 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.83s 2026-04-06 06:29:35.302263 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.60s 2026-04-06 06:29:35.302274 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.48s 2026-04-06 06:29:35.302285 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.39s 2026-04-06 06:29:35.492819 | orchestrator | + osism apply -a upgrade barbican 2026-04-06 06:29:36.750821 | orchestrator | 2026-04-06 06:29:36 | INFO  | Prepare task for execution of barbican. 2026-04-06 06:29:36.815906 | orchestrator | 2026-04-06 06:29:36 | INFO  | Task 69e3a54e-9b42-43d0-88fc-84b5ef0a54c8 (barbican) was prepared for execution. 2026-04-06 06:29:36.816025 | orchestrator | 2026-04-06 06:29:36 | INFO  | It takes a moment until task 69e3a54e-9b42-43d0-88fc-84b5ef0a54c8 (barbican) has been started and output is visible here. 
2026-04-06 06:29:51.259121 | orchestrator | 2026-04-06 06:29:51.259232 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 06:29:51.259248 | orchestrator | 2026-04-06 06:29:51.259260 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 06:29:51.259271 | orchestrator | Monday 06 April 2026 06:29:42 +0000 (0:00:02.298) 0:00:02.298 ********** 2026-04-06 06:29:51.259283 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:29:51.259295 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:29:51.259305 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:29:51.259316 | orchestrator | 2026-04-06 06:29:51.259327 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 06:29:51.259338 | orchestrator | Monday 06 April 2026 06:29:44 +0000 (0:00:01.763) 0:00:04.061 ********** 2026-04-06 06:29:51.259348 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-04-06 06:29:51.259359 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-04-06 06:29:51.259386 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-04-06 06:29:51.259398 | orchestrator | 2026-04-06 06:29:51.259409 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-04-06 06:29:51.259420 | orchestrator | 2026-04-06 06:29:51.259431 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-06 06:29:51.259442 | orchestrator | Monday 06 April 2026 06:29:46 +0000 (0:00:01.894) 0:00:05.955 ********** 2026-04-06 06:29:51.259453 | orchestrator | included: /ansible/roles/barbican/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 06:29:51.259464 | orchestrator | 2026-04-06 06:29:51.259475 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 
2026-04-06 06:29:51.259486 | orchestrator | Monday 06 April 2026 06:29:49 +0000 (0:00:03.183) 0:00:09.139 ********** 2026-04-06 06:29:51.259503 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:29:51.259520 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:29:51.259574 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:29:51.259595 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:29:51.259609 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:29:51.259620 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:29:51.259640 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:29:51.259653 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:29:51.259671 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:30:01.635770 | orchestrator | 2026-04-06 06:30:01.635942 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-04-06 06:30:01.635961 | orchestrator | Monday 06 April 2026 06:29:52 +0000 (0:00:03.145) 0:00:12.284 ********** 2026-04-06 06:30:01.635974 | orchestrator | ok: [testbed-node-0] => (item=barbican-api/vassals) 2026-04-06 06:30:01.635999 | orchestrator | ok: [testbed-node-1] => (item=barbican-api/vassals) 2026-04-06 
06:30:01.636011 | orchestrator | ok: [testbed-node-2] => (item=barbican-api/vassals) 2026-04-06 06:30:01.636023 | orchestrator | 2026-04-06 06:30:01.636035 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-04-06 06:30:01.636047 | orchestrator | Monday 06 April 2026 06:29:54 +0000 (0:00:01.914) 0:00:14.199 ********** 2026-04-06 06:30:01.636058 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:30:01.636071 | orchestrator | 2026-04-06 06:30:01.636101 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-04-06 06:30:01.636112 | orchestrator | Monday 06 April 2026 06:29:55 +0000 (0:00:01.196) 0:00:15.395 ********** 2026-04-06 06:30:01.636123 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:30:01.636134 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:30:01.636146 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:30:01.636157 | orchestrator | 2026-04-06 06:30:01.636168 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-06 06:30:01.636179 | orchestrator | Monday 06 April 2026 06:29:56 +0000 (0:00:01.478) 0:00:16.874 ********** 2026-04-06 06:30:01.636191 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 06:30:01.636202 | orchestrator | 2026-04-06 06:30:01.636213 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-04-06 06:30:01.636227 | orchestrator | Monday 06 April 2026 06:29:58 +0000 (0:00:01.752) 0:00:18.627 ********** 2026-04-06 06:30:01.636246 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:30:01.636299 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:30:01.636350 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:30:01.636383 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:30:01.636406 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:30:01.636444 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:30:01.636465 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:30:01.636487 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:30:01.636521 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:30:04.998683 | orchestrator | 2026-04-06 06:30:04.998831 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-04-06 06:30:04.998845 | orchestrator | Monday 06 April 2026 06:30:02 +0000 (0:00:04.033) 0:00:22.660 ********** 2026-04-06 06:30:04.998905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:30:04.998951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 06:30:04.998963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:30:04.998973 | orchestrator | 
skipping: [testbed-node-0]
2026-04-06 06:30:04.998985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:30:04.999015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-06 06:30:04.999031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-06 06:30:04.999047 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:30:04.999057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:30:04.999066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-06 06:30:04.999075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-06 06:30:04.999084 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:30:04.999093 | orchestrator |
2026-04-06 06:30:04.999102 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2026-04-06 06:30:04.999111 | orchestrator | Monday 06 April 2026 06:30:04 +0000 (0:00:01.927) 0:00:24.588 **********
2026-04-06 06:30:04.999127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:30:07.821907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-06 06:30:07.822005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-06 06:30:07.822075 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:30:07.822093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:30:07.822107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-06 06:30:07.822120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-06 06:30:07.822132 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:30:07.822167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:30:07.822203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-06 06:30:07.822216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-06 06:30:07.822227 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:30:07.822239 | orchestrator |
2026-04-06 06:30:07.822251 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2026-04-06 06:30:07.822263 | orchestrator | Monday 06 April 2026 06:30:06 +0000 (0:00:01.679) 0:00:26.268 **********
2026-04-06 06:30:07.822275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:30:07.822302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:30:19.756844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:30:19.756979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-06 06:30:19.756998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-06 06:30:19.757012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-06 06:30:19.757025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-06 06:30:19.757099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-06 06:30:19.757113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-06 06:30:19.757125 | orchestrator |
2026-04-06 06:30:19.757138 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-04-06 06:30:19.757151 | orchestrator | Monday 06 April 2026 06:30:10 +0000 (0:00:04.355) 0:00:30.623 **********
2026-04-06 06:30:19.757163 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:30:19.757180 | orchestrator | ok: [testbed-node-1]
2026-04-06 06:30:19.757199 | orchestrator | ok: [testbed-node-2]
2026-04-06 06:30:19.757221 | orchestrator |
2026-04-06 06:30:19.757252 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-04-06 06:30:19.757269 | orchestrator | Monday 06 April 2026 06:30:13 +0000 (0:00:02.326) 0:00:33.077 **********
2026-04-06 06:30:19.757286 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 06:30:19.757305 | orchestrator |
2026-04-06 06:30:19.757322 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-04-06 06:30:19.757341 | orchestrator | Monday 06 April 2026 06:30:15 +0000 (0:00:02.326) 0:00:35.403 **********
2026-04-06 06:30:19.757361 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:30:19.757380 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:30:19.757401 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:30:19.757422 | orchestrator |
2026-04-06 06:30:19.757435 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-04-06 06:30:19.757448 | orchestrator | Monday 06 April 2026 06:30:17 +0000 (0:00:01.620) 0:00:37.024 **********
2026-04-06 06:30:19.757464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:30:19.757499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:30:19.757528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:30:25.812609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-06 06:30:25.812718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-06 06:30:25.812795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-06 06:30:25.812838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-06 06:30:25.812867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-06 06:30:25.812879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-06 06:30:25.812891 | orchestrator |
2026-04-06 06:30:25.812905 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2026-04-06 06:30:25.812935 | orchestrator | Monday 06 April 2026 06:30:25 +0000 (0:00:07.998) 0:00:45.023 **********
2026-04-06 06:30:25.812950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:30:25.812964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-06 06:30:25.812986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-06 06:30:25.812997 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:30:25.813015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:30:25.813035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-06 06:30:29.528168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-06 06:30:29.528269 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:30:29.528291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:30:29.528334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-06 06:30:29.528362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-06 06:30:29.528375 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:30:29.528386 | orchestrator |
2026-04-06 06:30:29.528398 | orchestrator | TASK [service-check-containers : barbican | Check containers] ******************
2026-04-06 06:30:29.528410 | orchestrator | Monday 06 April 2026 06:30:27 +0000 (0:00:02.315) 0:00:47.338 **********
2026-04-06 06:30:29.528440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:30:29.528454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:30:29.528475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:30:29.528492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:30:29.528505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:30:29.528525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:30:33.798354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:30:33.798501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:30:33.798519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:30:33.798532 | orchestrator | 2026-04-06 06:30:33.798546 | orchestrator | TASK [service-check-containers : barbican | Notify handlers to restart containers] *** 2026-04-06 06:30:33.798559 | orchestrator | Monday 06 April 2026 06:30:31 +0000 (0:00:04.083) 0:00:51.422 ********** 2026-04-06 06:30:33.798571 | orchestrator | changed: [testbed-node-0] => { 2026-04-06 06:30:33.798583 | orchestrator |  "msg": 
"Notifying handlers" 2026-04-06 06:30:33.798594 | orchestrator | } 2026-04-06 06:30:33.798606 | orchestrator | changed: [testbed-node-1] => { 2026-04-06 06:30:33.798616 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:30:33.798627 | orchestrator | } 2026-04-06 06:30:33.798638 | orchestrator | changed: [testbed-node-2] => { 2026-04-06 06:30:33.798648 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:30:33.798659 | orchestrator | } 2026-04-06 06:30:33.798670 | orchestrator | 2026-04-06 06:30:33.798682 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-06 06:30:33.798693 | orchestrator | Monday 06 April 2026 06:30:32 +0000 (0:00:01.425) 0:00:52.848 ********** 2026-04-06 06:30:33.798790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:30:33.798841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 06:30:33.798877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:30:33.798896 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:30:33.798918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:30:33.798938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 06:30:33.798968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:30:33.798990 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:30:33.799026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:33:34.555434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-06 06:33:34.555547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:33:34.555562 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:33:34.555574 | orchestrator | 2026-04-06 06:33:34.555585 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-04-06 06:33:34.555597 | orchestrator | Monday 06 April 2026 06:30:35 +0000 (0:00:02.463) 0:00:55.311 ********** 2026-04-06 06:33:34.555607 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:33:34.555616 | orchestrator | 2026-04-06 06:33:34.555626 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-06 06:33:34.555636 | orchestrator | Monday 06 April 2026 06:30:48 +0000 (0:00:12.866) 0:01:08.178 ********** 2026-04-06 06:33:34.555645 | orchestrator | 2026-04-06 06:33:34.555655 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-06 06:33:34.555665 | orchestrator | Monday 06 April 2026 06:30:48 +0000 (0:00:00.430) 0:01:08.608 ********** 2026-04-06 06:33:34.555674 | orchestrator | 2026-04-06 06:33:34.555684 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-06 06:33:34.555693 | orchestrator | Monday 06 April 2026 06:30:49 +0000 (0:00:00.448) 0:01:09.057 ********** 2026-04-06 06:33:34.555703 | orchestrator | 2026-04-06 06:33:34.555713 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-04-06 06:33:34.555722 | orchestrator | Monday 06 April 2026 06:30:49 +0000 (0:00:00.808) 0:01:09.866 ********** 2026-04-06 06:33:34.555732 | orchestrator | changed: 
[testbed-node-0] 2026-04-06 06:33:34.555741 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:33:34.555751 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:33:34.555760 | orchestrator | 2026-04-06 06:33:34.555826 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-04-06 06:33:34.555856 | orchestrator | Monday 06 April 2026 06:33:03 +0000 (0:02:13.740) 0:03:23.606 ********** 2026-04-06 06:33:34.555867 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:33:34.555877 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:33:34.555887 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:33:34.555896 | orchestrator | 2026-04-06 06:33:34.555906 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-04-06 06:33:34.555940 | orchestrator | Monday 06 April 2026 06:33:16 +0000 (0:00:12.504) 0:03:36.111 ********** 2026-04-06 06:33:34.555950 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:33:34.555960 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:33:34.555970 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:33:34.555981 | orchestrator | 2026-04-06 06:33:34.555993 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 06:33:34.556005 | orchestrator | testbed-node-0 : ok=17  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-06 06:33:34.556019 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-06 06:33:34.556029 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-06 06:33:34.556040 | orchestrator | 2026-04-06 06:33:34.556051 | orchestrator | 2026-04-06 06:33:34.556063 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 06:33:34.556074 | orchestrator | Monday 06 April 2026 
06:33:34 +0000 (0:00:17.938) 0:03:54.049 ********** 2026-04-06 06:33:34.556085 | orchestrator | =============================================================================== 2026-04-06 06:33:34.556097 | orchestrator | barbican : Restart barbican-api container ----------------------------- 133.74s 2026-04-06 06:33:34.556108 | orchestrator | barbican : Restart barbican-worker container --------------------------- 17.94s 2026-04-06 06:33:34.556119 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.87s 2026-04-06 06:33:34.556129 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 12.50s 2026-04-06 06:33:34.556138 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.00s 2026-04-06 06:33:34.556164 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.35s 2026-04-06 06:33:34.556174 | orchestrator | service-check-containers : barbican | Check containers ------------------ 4.08s 2026-04-06 06:33:34.556184 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.03s 2026-04-06 06:33:34.556193 | orchestrator | barbican : include_tasks ------------------------------------------------ 3.18s 2026-04-06 06:33:34.556202 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.15s 2026-04-06 06:33:34.556212 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.46s 2026-04-06 06:33:34.556221 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.45s 2026-04-06 06:33:34.556231 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.33s 2026-04-06 06:33:34.556240 | orchestrator | barbican : Copying over existing policy file ---------------------------- 2.31s 2026-04-06 06:33:34.556249 | orchestrator | service-cert-copy : barbican | Copying over 
backend internal TLS certificate --- 1.93s 2026-04-06 06:33:34.556260 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.91s 2026-04-06 06:33:34.556269 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.89s 2026-04-06 06:33:34.556279 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.76s 2026-04-06 06:33:34.556288 | orchestrator | barbican : include_tasks ------------------------------------------------ 1.75s 2026-04-06 06:33:34.556298 | orchestrator | barbican : Flush handlers ----------------------------------------------- 1.69s 2026-04-06 06:33:34.741639 | orchestrator | + osism apply -a upgrade designate 2026-04-06 06:33:36.026525 | orchestrator | 2026-04-06 06:33:36 | INFO  | Prepare task for execution of designate. 2026-04-06 06:33:36.094258 | orchestrator | 2026-04-06 06:33:36 | INFO  | Task 6a3f1545-a043-4c44-9582-7fde172bb7db (designate) was prepared for execution. 2026-04-06 06:33:36.094345 | orchestrator | 2026-04-06 06:33:36 | INFO  | It takes a moment until task 6a3f1545-a043-4c44-9582-7fde172bb7db (designate) has been started and output is visible here. 
2026-04-06 06:33:50.993210 | orchestrator | 2026-04-06 06:33:50.993328 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 06:33:50.993356 | orchestrator | 2026-04-06 06:33:50.993374 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 06:33:50.993385 | orchestrator | Monday 06 April 2026 06:33:41 +0000 (0:00:01.710) 0:00:01.710 ********** 2026-04-06 06:33:50.993397 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:33:50.993409 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:33:50.993419 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:33:50.993430 | orchestrator | 2026-04-06 06:33:50.993441 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 06:33:50.993452 | orchestrator | Monday 06 April 2026 06:33:42 +0000 (0:00:01.777) 0:00:03.488 ********** 2026-04-06 06:33:50.993464 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-04-06 06:33:50.993475 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-04-06 06:33:50.993486 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-04-06 06:33:50.993497 | orchestrator | 2026-04-06 06:33:50.993509 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-04-06 06:33:50.993520 | orchestrator | 2026-04-06 06:33:50.993549 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-06 06:33:50.993560 | orchestrator | Monday 06 April 2026 06:33:45 +0000 (0:00:02.739) 0:00:06.227 ********** 2026-04-06 06:33:50.993572 | orchestrator | included: /ansible/roles/designate/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 06:33:50.993583 | orchestrator | 2026-04-06 06:33:50.993594 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 
2026-04-06 06:33:50.993605 | orchestrator | Monday 06 April 2026 06:33:48 +0000 (0:00:02.901) 0:00:09.129 ********** 2026-04-06 06:33:50.993620 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:33:50.993639 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:33:50.993671 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:33:50.993707 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 06:33:50.993725 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 
'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 06:33:50.993739 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 06:33:50.993753 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 
2026-04-06 06:33:50.993767 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-06 06:33:50.993828 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-06 06:33:50.993852 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-06 
06:33:58.797640 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-06 06:33:58.797739 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-06 06:33:58.797751 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-06 06:33:58.797763 | orchestrator | ok: 
[testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-06 06:33:58.797853 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-06 06:33:58.797874 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:33:58.797922 | orchestrator | ok: 
[testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:33:58.797942 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:33:58.797953 | orchestrator | 2026-04-06 06:33:58.797963 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-04-06 06:33:58.797974 | orchestrator | Monday 06 April 2026 06:33:52 +0000 (0:00:04.355) 0:00:13.484 ********** 2026-04-06 06:33:58.797982 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:33:58.797992 | orchestrator | 2026-04-06 06:33:58.798001 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-04-06 06:33:58.798010 | orchestrator | Monday 06 April 2026 06:33:53 +0000 (0:00:01.123) 0:00:14.608 ********** 2026-04-06 06:33:58.798071 | orchestrator | skipping: [testbed-node-0] 2026-04-06 
06:33:58.798080 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:33:58.798089 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:33:58.798097 | orchestrator | 2026-04-06 06:33:58.798106 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-06 06:33:58.798115 | orchestrator | Monday 06 April 2026 06:33:55 +0000 (0:00:01.470) 0:00:16.079 ********** 2026-04-06 06:33:58.798124 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 06:33:58.798133 | orchestrator | 2026-04-06 06:33:58.798187 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-04-06 06:33:58.798205 | orchestrator | Monday 06 April 2026 06:33:57 +0000 (0:00:01.858) 0:00:17.937 ********** 2026-04-06 06:33:58.798268 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:33:58.798294 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:33:58.798331 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:34:02.758475 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 06:34:02.758623 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 06:34:02.758690 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 06:34:02.758713 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:02.758734 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:02.758775 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:02.758903 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:02.758920 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:02.758943 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:02.758955 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:02.758971 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:02.758984 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:02.759013 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:05.059057 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:05.059199 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:05.059218 | orchestrator | 2026-04-06 06:34:05.059233 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-04-06 06:34:05.059245 | orchestrator | Monday 06 April 2026 06:34:04 +0000 (0:00:06.673) 0:00:24.611 ********** 2026-04-06 06:34:05.059260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:34:05.059277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 06:34:05.059305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 06:34:05.059340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:34:05.059363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:34:05.059376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 06:34:05.059388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 06:34:05.059405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 06:34:05.059425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 06:34:07.322620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 06:34:07.322721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 06:34:07.322733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:34:07.322743 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:34:07.322752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 06:34:07.322760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 06:34:07.322835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 06:34:07.322881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 06:34:07.322890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:34:07.322897 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:34:07.322905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:34:07.322912 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:34:07.322919 | orchestrator | 2026-04-06 06:34:07.322926 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal 
TLS key] *** 2026-04-06 06:34:07.322935 | orchestrator | Monday 06 April 2026 06:34:06 +0000 (0:00:02.461) 0:00:27.073 ********** 2026-04-06 06:34:07.322943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:34:07.322958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 06:34:07.322977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 06:34:07.692278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:34:07.692370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 06:34:07.692382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:34:07.692409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 
06:34:07.692439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 06:34:07.692464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 06:34:07.692474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 06:34:07.692484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:34:07.692494 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:34:07.692505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 06:34:07.692514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 06:34:07.692534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 06:34:07.692550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 06:34:12.299120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:34:12.299205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 06:34:12.299215 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:34:12.299225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:34:12.299232 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:34:12.299238 | orchestrator | 2026-04-06 06:34:12.299245 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-04-06 06:34:12.299253 | orchestrator | Monday 06 April 2026 06:34:08 +0000 (0:00:02.500) 0:00:29.574 ********** 2026-04-06 06:34:12.299274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:34:12.299312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:34:12.299321 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:34:12.299329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 06:34:12.299337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 06:34:12.299352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 06:34:12.299359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:12.299372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 
'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:19.314870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:19.314991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:19.315009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:19.315021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:19.315074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:19.315087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:19.315120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:19.315133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:19.315145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:19.315156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:19.315176 | orchestrator | 2026-04-06 06:34:19.315189 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-04-06 06:34:19.315202 | orchestrator | Monday 06 April 2026 06:34:16 +0000 (0:00:07.147) 0:00:36.721 ********** 2026-04-06 06:34:19.315219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:34:19.315245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:34:28.780298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:34:28.780391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 06:34:28.780426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 
53'], 'timeout': '30'}}})
2026-04-06 06:34:28.780448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-06 06:34:28.780457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-06 06:34:28.780481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-06 06:34:28.780489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-06 06:34:28.780498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-06 06:34:28.780512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-06 06:34:28.780523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-06 06:34:28.780531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-06 06:34:28.780539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-06 06:34:28.780553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-06 06:34:41.833399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-06 06:34:41.833569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-06 06:34:41.833606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-06 06:34:41.833619 | orchestrator |
2026-04-06 06:34:41.833632 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-04-06 06:34:41.833644 | orchestrator | Monday 06 April 2026 06:34:32 +0000 (0:00:15.995) 0:00:52.718 **********
2026-04-06 06:34:41.833655 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-06 06:34:41.833667 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-06 06:34:41.833677 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-06 06:34:41.833688 | orchestrator |
2026-04-06 06:34:41.833699 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-04-06 06:34:41.833710 | orchestrator | Monday 06 April 2026 06:34:36 +0000 (0:00:04.811) 0:00:57.529 **********
2026-04-06 06:34:41.833720 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-06 06:34:41.833731 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-06 06:34:41.833742 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-06 06:34:41.833753 | orchestrator |
2026-04-06 06:34:41.833764 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-04-06 06:34:41.833774 | orchestrator | Monday 06 April 2026 06:34:40 +0000 (0:00:03.519) 0:01:01.049 **********
2026-04-06 06:34:41.833787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:34:41.833906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:34:41.833923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:34:41.833945 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-06 06:34:41.833960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-06 06:34:41.833975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-06 06:34:41.833998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-06 06:34:44.919913 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-06 06:34:44.920024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-06 06:34:44.920059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-06 06:34:44.920074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-06 06:34:44.920085 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-06 06:34:44.920097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-06 06:34:44.920148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-06 06:34:44.920161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-06 06:34:44.920179 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-06 06:34:44.920191 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-06 06:34:44.920203 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-06 06:34:44.920214 | orchestrator |
2026-04-06 06:34:44.920227 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-04-06 06:34:44.920240 | orchestrator | Monday 06 April 2026 06:34:44 +0000 (0:00:03.837) 0:01:04.886 **********
2026-04-06 06:34:44.920267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:34:46.009771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:34:46.009942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:34:46.009963 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-06 06:34:46.009978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-06 06:34:46.010075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-06 06:34:46.010112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-06 06:34:46.010125 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-06 06:34:46.010144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-06 06:34:46.010156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-06 06:34:46.010168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-06 06:34:46.010193 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-06 06:34:46.010215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-06 06:34:50.120216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-06 06:34:50.120340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-06 06:34:50.120375 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-06 06:34:50.120389 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-06 06:34:50.120464 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-06 06:34:50.120480 | orchestrator |
2026-04-06 06:34:50.120493 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-06 06:34:50.120506 | orchestrator | Monday 06 April 2026 06:34:48 +0000 (0:00:03.769) 0:01:08.656 **********
2026-04-06 06:34:50.120517 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:34:50.120529 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:34:50.120540 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:34:50.120551 | orchestrator |
2026-04-06 06:34:50.120562 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-04-06 06:34:50.120573 | orchestrator | Monday 06 April 2026 06:34:49 +0000 (0:00:01.419) 0:01:10.075 **********
2026-04-06 06:34:50.120605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:34:50.120622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-06 06:34:50.120641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 06:34:50.120654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 06:34:50.120674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 06:34:50.120686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:34:50.120698 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:34:50.120718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:34:53.412321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 06:34:53.412425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 06:34:53.412466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 06:34:53.412479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 06:34:53.412491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:34:53.412503 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:34:53.412537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:34:53.412561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 06:34:53.412581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 06:34:53.412593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 06:34:53.412604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 06:34:53.412615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:34:53.412627 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:34:53.412638 | orchestrator | 2026-04-06 06:34:53.412650 | orchestrator | TASK [service-check-containers : designate | Check containers] ***************** 2026-04-06 06:34:53.412662 | orchestrator | Monday 06 April 2026 06:34:51 +0000 (0:00:02.310) 0:01:12.386 ********** 2026-04-06 06:34:53.412681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:34:56.662551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:34:56.662690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:34:56.662710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 06:34:56.662725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 06:34:56.662737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-06 06:34:56.662778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:56.662791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:56.662852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:56.662864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:56.662875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:56.662886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-06 06:34:56.662910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-06 06:35:01.004531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-06 06:35:01.004658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-06 06:35:01.004685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:35:01.004709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:35:01.004731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:35:01.004752 | orchestrator | 2026-04-06 06:35:01.004774 | orchestrator | TASK [service-check-containers : designate | Notify handlers to restart containers] *** 2026-04-06 06:35:01.004795 | orchestrator | Monday 06 April 2026 06:34:58 +0000 (0:00:07.037) 0:01:19.423 ********** 2026-04-06 06:35:01.004891 | orchestrator | changed: [testbed-node-0] => { 2026-04-06 06:35:01.004945 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:35:01.004965 | orchestrator | } 2026-04-06 06:35:01.004977 | orchestrator | changed: [testbed-node-1] => { 2026-04-06 06:35:01.004988 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:35:01.004999 | orchestrator | } 2026-04-06 06:35:01.005010 | orchestrator | changed: [testbed-node-2] => { 2026-04-06 06:35:01.005021 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:35:01.005033 | orchestrator | } 
2026-04-06 06:35:01.005047 | orchestrator | 2026-04-06 06:35:01.005060 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-06 06:35:01.005080 | orchestrator | Monday 06 April 2026 06:35:00 +0000 (0:00:01.592) 0:01:21.016 ********** 2026-04-06 06:35:01.005145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:35:01.005175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 06:35:01.005196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 06:35:01.005216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 06:35:01.005237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 06:35:01.005269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:35:01.005291 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:35:01.005331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:35:18.798319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 
'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 06:35:18.798439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 06:35:18.798459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 06:35:18.798537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 
'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 06:35:18.798568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:35:18.798581 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:35:18.798615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:35:18.798633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-06 06:35:18.798645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-06 06:35:18.798657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-06 06:35:18.798676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-06 06:35:18.798693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:35:18.798705 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:35:18.798717 | orchestrator | 2026-04-06 06:35:18.798729 | orchestrator | TASK [designate : Running Designate bootstrap 
container] *********************** 2026-04-06 06:35:18.798741 | orchestrator | Monday 06 April 2026 06:35:02 +0000 (0:00:02.198) 0:01:23.214 ********** 2026-04-06 06:35:18.798752 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:35:18.798764 | orchestrator | 2026-04-06 06:35:18.798774 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-06 06:35:18.798785 | orchestrator | Monday 06 April 2026 06:35:17 +0000 (0:00:15.096) 0:01:38.310 ********** 2026-04-06 06:35:18.798796 | orchestrator | 2026-04-06 06:35:18.798828 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-06 06:35:18.798840 | orchestrator | Monday 06 April 2026 06:35:18 +0000 (0:00:00.638) 0:01:38.949 ********** 2026-04-06 06:35:18.798850 | orchestrator | 2026-04-06 06:35:18.798864 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-06 06:35:18.798885 | orchestrator | Monday 06 April 2026 06:35:18 +0000 (0:00:00.448) 0:01:39.397 ********** 2026-04-06 06:37:46.755395 | orchestrator | 2026-04-06 06:37:46.755545 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-04-06 06:37:46.755565 | orchestrator | Monday 06 April 2026 06:35:19 +0000 (0:00:00.811) 0:01:40.209 ********** 2026-04-06 06:37:46.755577 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:37:46.755589 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:37:46.755600 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:37:46.755611 | orchestrator | 2026-04-06 06:37:46.755622 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-04-06 06:37:46.755633 | orchestrator | Monday 06 April 2026 06:35:34 +0000 (0:00:15.225) 0:01:55.435 ********** 2026-04-06 06:37:46.755644 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:37:46.755655 | orchestrator | changed: 
[testbed-node-1] 2026-04-06 06:37:46.755666 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:37:46.755677 | orchestrator | 2026-04-06 06:37:46.755688 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-04-06 06:37:46.755699 | orchestrator | Monday 06 April 2026 06:35:48 +0000 (0:00:13.356) 0:02:08.792 ********** 2026-04-06 06:37:46.755735 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:37:46.755746 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:37:46.755757 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:37:46.755767 | orchestrator | 2026-04-06 06:37:46.755778 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-04-06 06:37:46.755789 | orchestrator | Monday 06 April 2026 06:36:01 +0000 (0:00:13.513) 0:02:22.305 ********** 2026-04-06 06:37:46.755800 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:37:46.755810 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:37:46.755821 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:37:46.755832 | orchestrator | 2026-04-06 06:37:46.755843 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-04-06 06:37:46.755909 | orchestrator | Monday 06 April 2026 06:37:04 +0000 (0:01:03.131) 0:03:25.437 ********** 2026-04-06 06:37:46.755920 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:37:46.755931 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:37:46.755942 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:37:46.755953 | orchestrator | 2026-04-06 06:37:46.755964 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-04-06 06:37:46.755974 | orchestrator | Monday 06 April 2026 06:37:18 +0000 (0:00:13.465) 0:03:38.902 ********** 2026-04-06 06:37:46.755985 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:37:46.755996 | orchestrator | changed: 
[testbed-node-2] 2026-04-06 06:37:46.756006 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:37:46.756017 | orchestrator | 2026-04-06 06:37:46.756028 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-04-06 06:37:46.756038 | orchestrator | Monday 06 April 2026 06:37:37 +0000 (0:00:19.363) 0:03:58.266 ********** 2026-04-06 06:37:46.756049 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:37:46.756060 | orchestrator | 2026-04-06 06:37:46.756071 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 06:37:46.756083 | orchestrator | testbed-node-0 : ok=22  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-06 06:37:46.756095 | orchestrator | testbed-node-1 : ok=20  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-06 06:37:46.756106 | orchestrator | testbed-node-2 : ok=20  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-06 06:37:46.756117 | orchestrator | 2026-04-06 06:37:46.756128 | orchestrator | 2026-04-06 06:37:46.756139 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 06:37:46.756150 | orchestrator | Monday 06 April 2026 06:37:46 +0000 (0:00:08.786) 0:04:07.052 ********** 2026-04-06 06:37:46.756160 | orchestrator | =============================================================================== 2026-04-06 06:37:46.756171 | orchestrator | designate : Restart designate-producer container ----------------------- 63.13s 2026-04-06 06:37:46.756182 | orchestrator | designate : Restart designate-worker container ------------------------- 19.36s 2026-04-06 06:37:46.756193 | orchestrator | designate : Copying over designate.conf -------------------------------- 16.00s 2026-04-06 06:37:46.756206 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 15.23s 2026-04-06 06:37:46.756224 | 
orchestrator | designate : Running Designate bootstrap container ---------------------- 15.10s 2026-04-06 06:37:46.756261 | orchestrator | designate : Restart designate-central container ------------------------ 13.51s 2026-04-06 06:37:46.756280 | orchestrator | designate : Restart designate-mdns container --------------------------- 13.47s 2026-04-06 06:37:46.756298 | orchestrator | designate : Restart designate-api container ---------------------------- 13.36s 2026-04-06 06:37:46.756316 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.79s 2026-04-06 06:37:46.756334 | orchestrator | designate : Copying over config.json files for services ----------------- 7.15s 2026-04-06 06:37:46.756352 | orchestrator | service-check-containers : designate | Check containers ----------------- 7.04s 2026-04-06 06:37:46.756383 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.67s 2026-04-06 06:37:46.756402 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.81s 2026-04-06 06:37:46.756420 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.36s 2026-04-06 06:37:46.756439 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.84s 2026-04-06 06:37:46.756457 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.77s 2026-04-06 06:37:46.756476 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.52s 2026-04-06 06:37:46.756507 | orchestrator | designate : include_tasks ----------------------------------------------- 2.90s 2026-04-06 06:37:46.756518 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.74s 2026-04-06 06:37:46.756529 | orchestrator | service-cert-copy : designate | Copying over backend internal TLS key --- 2.50s 2026-04-06 06:37:46.939765 | orchestrator | 
+ osism apply -a upgrade ceilometer 2026-04-06 06:37:48.334646 | orchestrator | 2026-04-06 06:37:48 | INFO  | Prepare task for execution of ceilometer. 2026-04-06 06:37:48.399738 | orchestrator | 2026-04-06 06:37:48 | INFO  | Task 7c1534fb-42a7-4183-80f5-f9b82224cd64 (ceilometer) was prepared for execution. 2026-04-06 06:37:48.399839 | orchestrator | 2026-04-06 06:37:48 | INFO  | It takes a moment until task 7c1534fb-42a7-4183-80f5-f9b82224cd64 (ceilometer) has been started and output is visible here. 2026-04-06 06:38:00.737962 | orchestrator | 2026-04-06 06:38:00.738136 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 06:38:00.738155 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-06 06:38:00.738168 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-06 06:38:00.738190 | orchestrator | 2026-04-06 06:38:00.738201 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 06:38:00.738212 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-06 06:38:00.738223 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-06 06:38:00.738245 | orchestrator | Monday 06 April 2026 06:37:52 +0000 (0:00:01.095) 0:00:01.095 ********** 2026-04-06 06:38:00.738256 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:38:00.738268 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:38:00.738279 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:38:00.738289 | orchestrator | ok: [testbed-node-3] 2026-04-06 06:38:00.738300 | orchestrator | ok: [testbed-node-4] 2026-04-06 06:38:00.738311 | orchestrator | ok: [testbed-node-5] 2026-04-06 06:38:00.738322 | orchestrator | 2026-04-06 06:38:00.738333 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 06:38:00.738343 | orchestrator | Monday 06 
April 2026 06:37:54 +0000 (0:00:01.631) 0:00:02.727 ********** 2026-04-06 06:38:00.738354 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-04-06 06:38:00.738366 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-04-06 06:38:00.738376 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-04-06 06:38:00.738387 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-04-06 06:38:00.738398 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-04-06 06:38:00.738408 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-04-06 06:38:00.738419 | orchestrator | 2026-04-06 06:38:00.738429 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-04-06 06:38:00.738442 | orchestrator | 2026-04-06 06:38:00.738455 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-04-06 06:38:00.738494 | orchestrator | Monday 06 April 2026 06:37:55 +0000 (0:00:01.176) 0:00:03.903 ********** 2026-04-06 06:38:00.738508 | orchestrator | included: /ansible/roles/ceilometer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 06:38:00.738521 | orchestrator | 2026-04-06 06:38:00.738535 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-04-06 06:38:00.738547 | orchestrator | Monday 06 April 2026 06:37:57 +0000 (0:00:01.752) 0:00:05.656 ********** 2026-04-06 06:38:00.738564 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-06 06:38:00.738581 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-06 06:38:00.738595 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-06 06:38:00.738657 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:00.738720 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:00.738746 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:00.738765 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': 
{'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:00.738780 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:00.738794 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:00.738806 | orchestrator | 2026-04-06 06:38:00.738817 | 
orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-04-06 06:38:00.738829 | orchestrator | Monday 06 April 2026 06:37:59 +0000 (0:00:02.321) 0:00:07.977 ********** 2026-04-06 06:38:00.738847 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-06 06:38:05.849752 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-06 06:38:05.849925 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-06 06:38:05.849952 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-06 06:38:05.849965 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-06 06:38:05.849976 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-06 06:38:05.849987 | orchestrator | 2026-04-06 06:38:05.850000 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-04-06 06:38:05.850013 | orchestrator | Monday 06 April 2026 06:38:02 +0000 (0:00:02.679) 0:00:10.656 ********** 2026-04-06 06:38:05.850106 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:38:05.850119 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:38:05.850130 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:38:05.850141 | orchestrator | ok: [testbed-node-3] 2026-04-06 06:38:05.850152 | orchestrator | ok: [testbed-node-4] 2026-04-06 06:38:05.850163 | orchestrator | ok: [testbed-node-5] 2026-04-06 06:38:05.850174 | orchestrator | 2026-04-06 06:38:05.850193 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-04-06 06:38:05.850267 | orchestrator | Monday 06 April 2026 06:38:03 +0000 (0:00:00.668) 0:00:11.325 ********** 2026-04-06 06:38:05.850289 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:38:05.850310 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:38:05.850324 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:38:05.850337 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:38:05.850351 | 
orchestrator | skipping: [testbed-node-4]
2026-04-06 06:38:05.850364 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:38:05.850377 | orchestrator |
2026-04-06 06:38:05.850391 | orchestrator | TASK [ceilometer : Set the variable that control the copy of custom meter definitions] ***
2026-04-06 06:38:05.850406 | orchestrator | Monday 06 April 2026 06:38:03 +0000 (0:00:00.874) 0:00:12.199 **********
2026-04-06 06:38:05.850421 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:38:05.850435 | orchestrator | ok: [testbed-node-1]
2026-04-06 06:38:05.850448 | orchestrator | ok: [testbed-node-2]
2026-04-06 06:38:05.850461 | orchestrator | ok: [testbed-node-3]
2026-04-06 06:38:05.850474 | orchestrator | ok: [testbed-node-4]
2026-04-06 06:38:05.850487 | orchestrator | ok: [testbed-node-5]
2026-04-06 06:38:05.850500 | orchestrator |
2026-04-06 06:38:05.850514 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] *********
2026-04-06 06:38:05.850527 | orchestrator | Monday 06 April 2026 06:38:04 +0000 (0:00:00.789) 0:00:12.989 **********
2026-04-06 06:38:05.850545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 06:38:05.850578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:05.850592 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:38:05.850606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 06:38:05.850657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:05.850680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 06:38:05.850692 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:38:05.850703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:05.850715 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:38:05.850726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:05.850738 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:38:05.850754 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:05.850766 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:38:05.850778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:05.850789 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:38:05.850801 | orchestrator |
2026-04-06 06:38:05.850818 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] *************
2026-04-06 06:38:05.850830 | orchestrator | Monday 06 April 2026 06:38:05 +0000 (0:00:00.895) 0:00:13.884 **********
2026-04-06 06:38:05.850850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 06:38:12.622385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:12.622499 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:38:12.622518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 06:38:12.622532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:12.622544 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:38:12.622574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 06:38:12.622587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:12.622619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:12.622632 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:38:12.622662 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:38:12.622675 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:12.622686 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:38:12.622698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:12.622709 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:38:12.622720 | orchestrator |
2026-04-06 06:38:12.622733 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] ***
2026-04-06 06:38:12.622746 | orchestrator | Monday 06 April 2026 06:38:06 +0000 (0:00:01.082) 0:00:14.966 **********
2026-04-06 06:38:12.622758 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 06:38:12.622769 | orchestrator |
2026-04-06 06:38:12.622780 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] ***
2026-04-06 06:38:12.622792 | orchestrator | Monday 06 April 2026 06:38:07 +0000 (0:00:00.751) 0:00:15.717 **********
2026-04-06 06:38:12.622804 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:38:12.622825 | orchestrator | ok: [testbed-node-1]
2026-04-06 06:38:12.622851 | orchestrator | ok: [testbed-node-2]
2026-04-06 06:38:12.622904 | orchestrator | ok: [testbed-node-3]
2026-04-06 06:38:12.622924 | orchestrator | ok: [testbed-node-4]
2026-04-06 06:38:12.622942 | orchestrator | ok: [testbed-node-5]
2026-04-06 06:38:12.622961 | orchestrator |
2026-04-06 06:38:12.622981 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] *****
2026-04-06 06:38:12.623002 | orchestrator | Monday 06 April 2026 06:38:08 +0000 (0:00:00.651) 0:00:16.369 **********
2026-04-06 06:38:12.623033 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:38:12.623050 | orchestrator | ok: [testbed-node-1]
2026-04-06 06:38:12.623063 | orchestrator | ok: [testbed-node-2]
2026-04-06 06:38:12.623077 | orchestrator | ok: [testbed-node-3]
2026-04-06 06:38:12.623089 | orchestrator | ok: [testbed-node-4]
2026-04-06 06:38:12.623102 | orchestrator | ok: [testbed-node-5]
2026-04-06 06:38:12.623117 | orchestrator |
2026-04-06 06:38:12.623137 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] ****
2026-04-06 06:38:12.623157 | orchestrator | Monday 06 April 2026 06:38:09 +0000 (0:00:01.165) 0:00:17.535 **********
2026-04-06 06:38:12.623176 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:38:12.623196 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:38:12.623215 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:38:12.623235 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:38:12.623249 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:38:12.623261 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:38:12.623272 | orchestrator |
2026-04-06 06:38:12.623283 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] **********************
2026-04-06 06:38:12.623295 | orchestrator | Monday 06 April 2026 06:38:09 +0000 (0:00:00.645) 0:00:18.180 **********
2026-04-06 06:38:12.623314 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:38:12.623333 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:38:12.623351 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:38:12.623369 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:38:12.623380 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:38:12.623391 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:38:12.623403 | orchestrator |
2026-04-06 06:38:12.623414 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] ************************
2026-04-06 06:38:12.623425 | orchestrator | Monday 06 April 2026 06:38:10 +0000 (0:00:00.805) 0:00:18.986 **********
2026-04-06 06:38:12.623436 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-06 06:38:12.623452 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 06:38:12.623471 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-06 06:38:12.623489 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-06 06:38:12.623507 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-06 06:38:12.623518 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-06 06:38:12.623529 | orchestrator |
2026-04-06 06:38:12.623540 | orchestrator | TASK [ceilometer : Copying over polling.yaml] **********************************
2026-04-06 06:38:12.623551 | orchestrator | Monday 06 April 2026 06:38:12 +0000 (0:00:01.638) 0:00:20.625 **********
2026-04-06 06:38:12.623576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 06:38:16.086583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:16.086712 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:38:16.086742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 06:38:16.086812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:16.086834 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:38:16.086853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 06:38:16.086956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:16.086978 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:38:16.086997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:16.087035 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:38:16.087048 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:16.087071 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:38:16.087083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:16.087094 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:38:16.087106 | orchestrator |
2026-04-06 06:38:16.087118 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] *************************
2026-04-06 06:38:16.087138 | orchestrator | Monday 06 April 2026 06:38:13 +0000 (0:00:01.118) 0:00:21.743 **********
2026-04-06 06:38:16.087150 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:38:16.087161 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:38:16.087171 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:38:16.087182 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:38:16.087193 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:38:16.087204 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:38:16.087215 | orchestrator |
2026-04-06 06:38:16.087226 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] *****************
2026-04-06 06:38:16.087237 | orchestrator | Monday 06 April 2026 06:38:14 +0000 (0:00:00.744) 0:00:22.488 **********
2026-04-06 06:38:16.087248 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-06 06:38:16.087259 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 06:38:16.087270 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-06 06:38:16.087281 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-06 06:38:16.087292 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-06 06:38:16.087303 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-06 06:38:16.087313 | orchestrator |
2026-04-06 06:38:16.087324 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************
2026-04-06 06:38:16.087335 | orchestrator | Monday 06 April 2026 06:38:15 +0000 (0:00:01.549) 0:00:24.037 **********
2026-04-06 06:38:16.087347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 06:38:16.087359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:16.087377 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:38:16.087397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 06:38:21.492330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:21.492441 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:38:21.492473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 06:38:21.492486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:21.492498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:21.492510 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:21.492539 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:38:21.492549 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:38:21.492560 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:38:21.492587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:21.492598 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:38:21.492608 | orchestrator |
2026-04-06 06:38:21.492619 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] ***************
2026-04-06 06:38:21.492630 | orchestrator | Monday 06 April 2026 06:38:16 +0000 (0:00:01.042) 0:00:25.080 **********
2026-04-06 06:38:21.492639 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:38:21.492649 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:38:21.492659 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:38:21.492668 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:38:21.492678 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:38:21.492687 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:38:21.492697 | orchestrator |
2026-04-06 06:38:21.492706 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] *********************
2026-04-06 06:38:21.492716 | orchestrator | Monday 06 April 2026 06:38:17 +0000 (0:00:00.624) 0:00:25.705 **********
2026-04-06 06:38:21.492725 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:38:21.492735 | orchestrator |
2026-04-06 06:38:21.492744 | orchestrator | TASK [ceilometer : Set ceilometer policy file] *********************************
2026-04-06 06:38:21.492754 | orchestrator | Monday 06 April 2026 06:38:17 +0000 (0:00:00.150) 0:00:25.855 **********
2026-04-06 06:38:21.492764 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:38:21.492773 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:38:21.492782 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:38:21.492792 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:38:21.492801 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:38:21.492811 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:38:21.492821 | orchestrator |
2026-04-06 06:38:21.492835 | orchestrator | TASK [ceilometer : include_tasks] **********************************************
2026-04-06 06:38:21.492845 | orchestrator | Monday 06 April 2026 06:38:18 +0000 (0:00:00.797) 0:00:26.653 **********
2026-04-06 06:38:21.492856 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-06 06:38:21.492930 | orchestrator |
2026-04-06 06:38:21.492942 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] *****
2026-04-06 06:38:21.492953 | orchestrator | Monday 06 April 2026 06:38:20 +0000 (0:00:01.690) 0:00:28.343 **********
2026-04-06 06:38:21.492965 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 06:38:21.492986 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 06:38:21.492999 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-06 06:38:21.493020 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:23.288748 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:23.288915 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-06 06:38:23.288936 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:23.288970 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:23.288982 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:23.288994 | orchestrator | 2026-04-06 06:38:23.289008 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-04-06 06:38:23.289020 | orchestrator | Monday 06 April 2026 06:38:22 +0000 (0:00:02.212) 0:00:30.556 ********** 2026-04-06 06:38:23.289050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 06:38:23.289063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:38:23.289074 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:38:23.289093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 06:38:23.289105 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:38:23.289124 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:38:23.289135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 06:38:23.289146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:38:23.289158 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:38:23.289169 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:38:23.289186 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:38:26.840143 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:38:26.840255 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:38:26.840291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:38:26.840325 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:38:26.840337 | orchestrator | 2026-04-06 06:38:26.840350 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-04-06 06:38:26.840362 | orchestrator | Monday 06 April 2026 06:38:23 +0000 (0:00:01.354) 0:00:31.911 ********** 2026-04-06 06:38:26.840374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 06:38:26.840387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:38:26.840399 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:38:26.840410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 06:38:26.840441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 06:38:26.840453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:38:26.840472 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:38:26.840489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:38:26.840500 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:38:26.840512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 
06:38:26.840523 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:38:26.840535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:38:26.840546 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:38:26.840557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:38:26.840569 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:38:26.840580 | orchestrator | 2026-04-06 06:38:26.840592 | orchestrator | TASK [ceilometer : Copying over config.json files for services] **************** 2026-04-06 06:38:26.840603 | orchestrator | Monday 06 April 2026 06:38:25 +0000 (0:00:01.943) 0:00:33.855 ********** 2026-04-06 06:38:26.840674 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-06 06:38:31.553648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-06 06:38:31.553795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-06 06:38:31.553841 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:31.553917 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:31.553942 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:31.553963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:31.554124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:31.554152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:31.554171 | orchestrator | 2026-04-06 06:38:31.554190 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] ******************************* 2026-04-06 06:38:31.554209 | orchestrator | Monday 06 April 2026 06:38:28 +0000 (0:00:02.574) 0:00:36.429 ********** 2026-04-06 06:38:31.554226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-06 06:38:31.554246 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 
'timeout': '30'}}}) 2026-04-06 06:38:31.554264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-06 06:38:31.554296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-06 06:38:41.682205 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:41.682299 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:41.682311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:41.682320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:41.682328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:41.682341 | orchestrator | 2026-04-06 06:38:41.682355 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-04-06 06:38:41.682369 | orchestrator | Monday 06 April 2026 06:38:33 +0000 (0:00:05.493) 0:00:41.923 ********** 2026-04-06 06:38:41.682402 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-06 06:38:41.682415 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-06 06:38:41.682425 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-06 06:38:41.682437 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-06 06:38:41.682448 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-06 06:38:41.682460 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-06 06:38:41.682472 | orchestrator | 2026-04-06 06:38:41.682484 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-04-06 06:38:41.682495 | orchestrator | Monday 06 April 2026 06:38:35 +0000 (0:00:01.791) 0:00:43.715 ********** 2026-04-06 06:38:41.682507 | orchestrator | 
skipping: [testbed-node-0] 2026-04-06 06:38:41.682519 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:38:41.682530 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:38:41.682542 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:38:41.682553 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:38:41.682583 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:38:41.682597 | orchestrator | 2026-04-06 06:38:41.682608 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-04-06 06:38:41.682621 | orchestrator | Monday 06 April 2026 06:38:36 +0000 (0:00:00.580) 0:00:44.295 ********** 2026-04-06 06:38:41.682633 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:38:41.682645 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:38:41.682657 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:38:41.682669 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:38:41.682681 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:38:41.682703 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:38:41.682716 | orchestrator | 2026-04-06 06:38:41.682728 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-04-06 06:38:41.682742 | orchestrator | Monday 06 April 2026 06:38:37 +0000 (0:00:01.302) 0:00:45.598 ********** 2026-04-06 06:38:41.682755 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:38:41.682768 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:38:41.682782 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:38:41.682795 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:38:41.682809 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:38:41.682822 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:38:41.682836 | orchestrator | 2026-04-06 06:38:41.682849 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-04-06 06:38:41.682861 | orchestrator | Monday 06 
April 2026 06:38:38 +0000 (0:00:01.391) 0:00:46.990 ********** 2026-04-06 06:38:41.682936 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-06 06:38:41.682951 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-06 06:38:41.682964 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-06 06:38:41.682978 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-06 06:38:41.682991 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-06 06:38:41.683004 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-06 06:38:41.683018 | orchestrator | 2026-04-06 06:38:41.683032 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-04-06 06:38:41.683045 | orchestrator | Monday 06 April 2026 06:38:40 +0000 (0:00:01.522) 0:00:48.512 ********** 2026-04-06 06:38:41.683060 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-06 06:38:41.683088 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-06 06:38:41.683103 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-06 06:38:41.683118 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:41.683149 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:43.444910 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:43.444980 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:43.445003 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:43.445007 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:43.445011 | orchestrator | 2026-04-06 06:38:43.445016 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-04-06 06:38:43.445022 | orchestrator | Monday 06 April 2026 06:38:42 +0000 (0:00:02.177) 0:00:50.689 ********** 2026-04-06 06:38:43.445026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 06:38:43.445041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:38:43.445046 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:38:43.445061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 06:38:43.445065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:38:43.445072 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:38:43.445077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 06:38:43.445080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:38:43.445084 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:38:43.445088 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:38:43.445092 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:38:43.445101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:38:47.022663 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:38:47.023721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:38:47.023829 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:38:47.023851 | orchestrator | 2026-04-06 06:38:47.023899 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-04-06 06:38:47.023916 | orchestrator | Monday 06 April 2026 06:38:43 +0000 (0:00:01.258) 0:00:51.948 ********** 2026-04-06 06:38:47.023929 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:38:47.023943 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:38:47.023956 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:38:47.023970 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:38:47.023978 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:38:47.023986 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:38:47.023995 | orchestrator | 2026-04-06 06:38:47.024003 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-04-06 06:38:47.024011 | orchestrator | Monday 06 April 2026 06:38:44 +0000 (0:00:00.607) 0:00:52.556 ********** 2026-04-06 06:38:47.024021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 06:38:47.024032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 
'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:38:47.024041 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:38:47.024049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 06:38:47.024074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  
2026-04-06 06:38:47.024119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 06:38:47.024149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:38:47.024158 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:38:47.024166 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:38:47.024174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:38:47.024183 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:38:47.024191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:38:47.024200 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:38:47.024208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:38:47.024216 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:38:47.024224 | orchestrator | 2026-04-06 
06:38:47.024232 | orchestrator | TASK [service-check-containers : ceilometer | Check containers] **************** 2026-04-06 06:38:47.024240 | orchestrator | Monday 06 April 2026 06:38:45 +0000 (0:00:01.549) 0:00:54.105 ********** 2026-04-06 06:38:47.024261 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:49.370627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-06 06:38:49.370757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-06 06:38:49.370783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-06 06:38:49.370805 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:49.370827 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 
'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:49.370981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:49.371037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:49.371057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': 
{'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-06 06:38:49.371078 | orchestrator | 2026-04-06 06:38:49.371100 | orchestrator | TASK [service-check-containers : ceilometer | Notify handlers to restart containers] *** 2026-04-06 06:38:49.371121 | orchestrator | Monday 06 April 2026 06:38:48 +0000 (0:00:02.416) 0:00:56.522 ********** 2026-04-06 06:38:49.371143 | orchestrator | changed: [testbed-node-0] => { 2026-04-06 06:38:49.371164 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:38:49.371183 | orchestrator | } 2026-04-06 06:38:49.371195 | orchestrator | changed: [testbed-node-1] => { 2026-04-06 06:38:49.371207 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:38:49.371220 | orchestrator | } 2026-04-06 06:38:49.371232 | orchestrator | changed: [testbed-node-2] => { 2026-04-06 06:38:49.371244 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:38:49.371259 | orchestrator | } 2026-04-06 06:38:49.371278 | orchestrator | changed: [testbed-node-3] => { 2026-04-06 06:38:49.371296 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:38:49.371314 | orchestrator | } 2026-04-06 06:38:49.371331 | orchestrator | changed: [testbed-node-4] => { 2026-04-06 06:38:49.371349 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:38:49.371368 | orchestrator | } 2026-04-06 06:38:49.371386 | orchestrator | changed: [testbed-node-5] => { 2026-04-06 06:38:49.371405 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:38:49.371423 | orchestrator | } 
2026-04-06 06:38:49.371444 | orchestrator | 2026-04-06 06:38:49.371464 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-06 06:38:49.371481 | orchestrator | Monday 06 April 2026 06:38:48 +0000 (0:00:00.750) 0:00:57.272 ********** 2026-04-06 06:38:49.371494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 06:38:49.371519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:38:49.371530 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:38:49.371559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 06:39:40.944202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:39:40.944342 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:39:40.944371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-06 06:39:40.944392 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:39:40.944409 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:39:40.944458 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:39:40.944476 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:39:40.944493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:39:40.944527 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:39:40.944566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-06 06:39:40.944583 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:39:40.944600 | orchestrator | 2026-04-06 06:39:40.944618 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-04-06 06:39:40.944632 | orchestrator | Monday 06 April 2026 06:38:50 +0000 (0:00:01.997) 0:00:59.269 ********** 2026-04-06 06:39:40.944642 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:39:40.944652 | orchestrator | 2026-04-06 06:39:40.944662 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-06 06:39:40.944671 | orchestrator | Monday 06 April 2026 06:38:58 +0000 (0:00:07.849) 0:01:07.119 ********** 2026-04-06 06:39:40.944681 | orchestrator | 2026-04-06 06:39:40.944690 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-06 06:39:40.944700 | orchestrator | Monday 06 April 2026 06:38:58 +0000 (0:00:00.086) 
0:01:07.206 ********** 2026-04-06 06:39:40.944710 | orchestrator | 2026-04-06 06:39:40.944721 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-06 06:39:40.944732 | orchestrator | Monday 06 April 2026 06:38:59 +0000 (0:00:00.092) 0:01:07.298 ********** 2026-04-06 06:39:40.944743 | orchestrator | 2026-04-06 06:39:40.944754 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-06 06:39:40.944797 | orchestrator | Monday 06 April 2026 06:38:59 +0000 (0:00:00.294) 0:01:07.593 ********** 2026-04-06 06:39:40.944814 | orchestrator | 2026-04-06 06:39:40.944826 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-06 06:39:40.944837 | orchestrator | Monday 06 April 2026 06:38:59 +0000 (0:00:00.076) 0:01:07.669 ********** 2026-04-06 06:39:40.944848 | orchestrator | 2026-04-06 06:39:40.944859 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-06 06:39:40.944871 | orchestrator | Monday 06 April 2026 06:38:59 +0000 (0:00:00.074) 0:01:07.743 ********** 2026-04-06 06:39:40.944882 | orchestrator | 2026-04-06 06:39:40.944894 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-04-06 06:39:40.944905 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-06 06:39:40.944927 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-06 06:39:40.944949 | orchestrator | Monday 06 April 2026 06:38:59 +0000 (0:00:00.075) 0:01:07.818 ********** 2026-04-06 06:39:40.944959 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:39:40.944969 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:39:40.944979 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:39:40.944988 | orchestrator | 2026-04-06 06:39:40.944998 | orchestrator | RUNNING HANDLER [ceilometer : 
Restart ceilometer-central container] ************ 2026-04-06 06:39:40.945008 | orchestrator | Monday 06 April 2026 06:39:16 +0000 (0:00:16.727) 0:01:24.546 ********** 2026-04-06 06:39:40.945018 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:39:40.945028 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:39:40.945038 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:39:40.945047 | orchestrator | 2026-04-06 06:39:40.945057 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-04-06 06:39:40.945067 | orchestrator | Monday 06 April 2026 06:39:27 +0000 (0:00:11.344) 0:01:35.890 ********** 2026-04-06 06:39:40.945077 | orchestrator | changed: [testbed-node-3] 2026-04-06 06:39:40.945087 | orchestrator | changed: [testbed-node-5] 2026-04-06 06:39:40.945096 | orchestrator | changed: [testbed-node-4] 2026-04-06 06:39:40.945106 | orchestrator | 2026-04-06 06:39:40.945117 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 06:39:40.945134 | orchestrator | testbed-node-0 : ok=26  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-04-06 06:39:40.945159 | orchestrator | testbed-node-1 : ok=24  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-06 06:39:40.945178 | orchestrator | testbed-node-2 : ok=24  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-06 06:39:40.945194 | orchestrator | testbed-node-3 : ok=21  changed=5  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-06 06:39:40.945209 | orchestrator | testbed-node-4 : ok=21  changed=5  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-06 06:39:40.945225 | orchestrator | testbed-node-5 : ok=21  changed=5  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-06 06:39:40.945242 | orchestrator | 2026-04-06 06:39:40.945259 | orchestrator | 2026-04-06 06:39:40.945283 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-06 06:39:40.945294 | orchestrator | Monday 06 April 2026 06:39:40 +0000 (0:00:13.319) 0:01:49.210 ********** 2026-04-06 06:39:40.945304 | orchestrator | =============================================================================== 2026-04-06 06:39:40.945314 | orchestrator | ceilometer : Restart ceilometer-notification container ----------------- 16.73s 2026-04-06 06:39:40.945324 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 13.32s 2026-04-06 06:39:40.945333 | orchestrator | ceilometer : Restart ceilometer-central container ---------------------- 11.34s 2026-04-06 06:39:40.945343 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 7.85s 2026-04-06 06:39:40.945353 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 5.49s 2026-04-06 06:39:40.945373 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 2.68s 2026-04-06 06:39:41.370867 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.57s 2026-04-06 06:39:41.370971 | orchestrator | service-check-containers : ceilometer | Check containers ---------------- 2.42s 2026-04-06 06:39:41.370987 | orchestrator | ceilometer : Ensuring config directories exist -------------------------- 2.32s 2026-04-06 06:39:41.371026 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.21s 2026-04-06 06:39:41.371037 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.18s 2026-04-06 06:39:41.371048 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.00s 2026-04-06 06:39:41.371059 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.94s 2026-04-06 06:39:41.371070 | orchestrator | ceilometer : Check custom 
event_definitions.yaml exists ----------------- 1.79s 2026-04-06 06:39:41.371080 | orchestrator | ceilometer : include_tasks ---------------------------------------------- 1.75s 2026-04-06 06:39:41.371091 | orchestrator | ceilometer : include_tasks ---------------------------------------------- 1.69s 2026-04-06 06:39:41.371102 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.64s 2026-04-06 06:39:41.371113 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.63s 2026-04-06 06:39:41.371124 | orchestrator | ceilometer : Copying over existing policy file -------------------------- 1.55s 2026-04-06 06:39:41.371135 | orchestrator | ceilometer : Check custom gnocchi_resources.yaml exists ----------------- 1.55s 2026-04-06 06:39:41.559721 | orchestrator | + osism apply -a upgrade aodh 2026-04-06 06:39:42.865827 | orchestrator | 2026-04-06 06:39:42 | INFO  | Prepare task for execution of aodh. 2026-04-06 06:39:42.932959 | orchestrator | 2026-04-06 06:39:42 | INFO  | Task 4a5a2f72-3646-416b-9851-03d102d094ab (aodh) was prepared for execution. 2026-04-06 06:39:42.933048 | orchestrator | 2026-04-06 06:39:42 | INFO  | It takes a moment until task 4a5a2f72-3646-416b-9851-03d102d094ab (aodh) has been started and output is visible here. 
2026-04-06 06:39:57.264412 | orchestrator | 2026-04-06 06:39:57.264521 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 06:39:57.264538 | orchestrator | 2026-04-06 06:39:57.264550 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 06:39:57.264562 | orchestrator | Monday 06 April 2026 06:39:47 +0000 (0:00:01.549) 0:00:01.549 ********** 2026-04-06 06:39:57.264574 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:39:57.264585 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:39:57.264596 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:39:57.264607 | orchestrator | 2026-04-06 06:39:57.264618 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 06:39:57.264629 | orchestrator | Monday 06 April 2026 06:39:49 +0000 (0:00:01.758) 0:00:03.307 ********** 2026-04-06 06:39:57.264640 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-04-06 06:39:57.264651 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-04-06 06:39:57.264662 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-04-06 06:39:57.264672 | orchestrator | 2026-04-06 06:39:57.264683 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-04-06 06:39:57.264694 | orchestrator | 2026-04-06 06:39:57.264705 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-06 06:39:57.264715 | orchestrator | Monday 06 April 2026 06:39:51 +0000 (0:00:02.151) 0:00:05.459 ********** 2026-04-06 06:39:57.264726 | orchestrator | included: /ansible/roles/aodh/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 06:39:57.264797 | orchestrator | 2026-04-06 06:39:57.264810 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-04-06 
06:39:57.264822 | orchestrator | Monday 06 April 2026 06:39:54 +0000 (0:00:03.272) 0:00:08.732 ********** 2026-04-06 06:39:57.264853 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:39:57.264895 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:39:57.264928 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:39:57.264944 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 06:39:57.264960 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 06:39:57.264978 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 06:39:57.265002 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:39:57.265027 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:39:57.265054 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:39:57.265085 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:02.173121 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 
'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:02.173222 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:02.173263 | orchestrator | 2026-04-06 06:40:02.173279 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-04-06 06:40:02.173291 | orchestrator | Monday 06 April 2026 06:39:58 +0000 (0:00:03.923) 0:00:12.656 ********** 2026-04-06 06:40:02.173303 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:40:02.173315 | orchestrator | 2026-04-06 06:40:02.173326 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-04-06 06:40:02.173338 | orchestrator | Monday 06 April 2026 06:40:00 +0000 (0:00:01.133) 0:00:13.789 ********** 2026-04-06 06:40:02.173349 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:40:02.173360 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:40:02.173386 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:40:02.173397 | orchestrator | 2026-04-06 06:40:02.173409 | orchestrator | TASK [aodh : Copying over existing policy 
file] ******************************** 2026-04-06 06:40:02.173420 | orchestrator | Monday 06 April 2026 06:40:01 +0000 (0:00:01.352) 0:00:15.141 ********** 2026-04-06 06:40:02.173433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:40:02.173450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 06:40:02.173463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 06:40:02.173493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 06:40:02.173514 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:40:02.173526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:40:02.173545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 06:40:02.173558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 06:40:02.173570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 06:40:02.173581 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:40:02.173601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:40:08.067072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 06:40:08.067167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 
'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 06:40:08.067198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 06:40:08.067204 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:40:08.067212 | orchestrator | 2026-04-06 06:40:08.067218 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-06 06:40:08.067224 | orchestrator | Monday 06 April 2026 06:40:03 +0000 (0:00:01.947) 0:00:17.089 ********** 2026-04-06 06:40:08.067229 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 06:40:08.067235 | orchestrator | 2026-04-06 06:40:08.067239 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-04-06 06:40:08.067244 | orchestrator | Monday 06 April 2026 06:40:05 +0000 (0:00:01.755) 0:00:18.844 ********** 2026-04-06 
06:40:08.067250 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:40:08.067269 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 
2026-04-06 06:40:08.067281 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:40:08.067290 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 06:40:08.067295 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 06:40:08.067300 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 06:40:08.067305 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:08.067317 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:10.953193 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:10.953322 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:10.953339 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:10.953357 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:10.953373 | orchestrator | 2026-04-06 06:40:10.953387 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-04-06 06:40:10.953400 | orchestrator | Monday 06 April 2026 06:40:10 +0000 (0:00:04.942) 0:00:23.786 ********** 2026-04-06 06:40:10.953413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:40:10.953469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 06:40:10.953483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:40:10.953502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 06:40:10.953515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 06:40:10.953526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:40:10.953546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 06:40:10.953558 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:40:10.953579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 06:40:12.979866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 06:40:12.980000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 06:40:12.980020 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:40:12.980034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 06:40:12.980047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 06:40:12.980080 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:40:12.980092 | orchestrator | 2026-04-06 06:40:12.980104 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-04-06 06:40:12.980116 | orchestrator | Monday 06 April 2026 06:40:12 +0000 (0:00:02.233) 0:00:26.020 ********** 2026-04-06 06:40:12.980128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:40:12.980163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 06:40:12.980181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:40:12.980195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 06:40:12.980206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:40:12.980226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 06:40:12.980238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 06:40:12.980250 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:40:12.980269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 06:40:17.876853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 06:40:17.876948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 06:40:17.876960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 06:40:17.876988 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:40:17.876999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 06:40:17.877008 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:40:17.877016 | orchestrator | 2026-04-06 06:40:17.877025 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 2026-04-06 06:40:17.877042 | orchestrator | Monday 06 April 2026 06:40:14 +0000 (0:00:02.035) 0:00:28.055 ********** 2026-04-06 06:40:17.877057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 
'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:40:17.877105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:40:17.877123 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 06:40:17.877146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:40:17.877161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 06:40:17.877176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:17.877190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 06:40:17.877251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:27.225596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:27.225777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:27.225796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:27.225808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:27.225821 | orchestrator | 2026-04-06 06:40:27.225835 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-04-06 06:40:27.225847 | orchestrator | Monday 06 April 2026 06:40:20 +0000 (0:00:06.469) 0:00:34.525 ********** 2026-04-06 06:40:27.225861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:40:27.225910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:40:27.225934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:40:27.225947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 06:40:27.225960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 06:40:27.225971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 06:40:27.225988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:27.226009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:36.295887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:36.296015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:36.296038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:36.296058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 
06:40:36.296076 | orchestrator | 2026-04-06 06:40:36.296096 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-04-06 06:40:36.296114 | orchestrator | Monday 06 April 2026 06:40:30 +0000 (0:00:09.690) 0:00:44.215 ********** 2026-04-06 06:40:36.296130 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:40:36.296147 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:40:36.296163 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:40:36.296180 | orchestrator | 2026-04-06 06:40:36.296198 | orchestrator | TASK [service-check-containers : aodh | Check containers] ********************** 2026-04-06 06:40:36.296216 | orchestrator | Monday 06 April 2026 06:40:33 +0000 (0:00:02.935) 0:00:47.150 ********** 2026-04-06 06:40:36.296256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:40:36.296337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:40:36.296359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:40:36.296379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 06:40:36.296399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 06:40:36.296426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-06 06:40:36.296464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:40.363123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:40.363203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:40.363213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:40.363219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:40.363225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-06 06:40:40.363251 | orchestrator | 2026-04-06 06:40:40.363259 | orchestrator | TASK [service-check-containers : aodh | Notify handlers to restart containers] *** 2026-04-06 06:40:40.363266 | orchestrator | Monday 06 April 2026 06:40:38 +0000 (0:00:04.900) 
0:00:52.051 ********** 2026-04-06 06:40:40.363283 | orchestrator | changed: [testbed-node-0] => { 2026-04-06 06:40:40.363290 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:40:40.363297 | orchestrator | } 2026-04-06 06:40:40.363303 | orchestrator | changed: [testbed-node-1] => { 2026-04-06 06:40:40.363319 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:40:40.363325 | orchestrator | } 2026-04-06 06:40:40.363331 | orchestrator | changed: [testbed-node-2] => { 2026-04-06 06:40:40.363344 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:40:40.363350 | orchestrator | } 2026-04-06 06:40:40.363356 | orchestrator | 2026-04-06 06:40:40.363362 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-06 06:40:40.363368 | orchestrator | Monday 06 April 2026 06:40:39 +0000 (0:00:01.562) 0:00:53.613 ********** 2026-04-06 06:40:40.363389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:40:40.363399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 06:40:40.363406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 06:40:40.363412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 06:40:40.363426 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:40:40.363436 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:40:40.363443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 06:40:40.363455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 06:41:57.868533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 06:41:57.868698 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:41:57.868721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  
2026-04-06 06:41:57.868764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-06 06:41:57.868792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-06 06:41:57.868805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-06 06:41:57.868816 | orchestrator | 
skipping: [testbed-node-2]
2026-04-06 06:41:57.868828 | orchestrator |
2026-04-06 06:41:57.868840 | orchestrator | TASK [aodh : Running aodh bootstrap container] *********************************
2026-04-06 06:41:57.868852 | orchestrator | Monday 06 April 2026 06:40:41 +0000 (0:00:02.069) 0:00:55.682 **********
2026-04-06 06:41:57.868863 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:41:57.868874 | orchestrator |
2026-04-06 06:41:57.868885 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-04-06 06:41:57.868896 | orchestrator | Monday 06 April 2026 06:40:58 +0000 (0:00:16.648) 0:01:12.331 **********
2026-04-06 06:41:57.868907 | orchestrator |
2026-04-06 06:41:57.868917 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-04-06 06:41:57.868928 | orchestrator | Monday 06 April 2026 06:40:59 +0000 (0:00:00.442) 0:01:12.774 **********
2026-04-06 06:41:57.868939 | orchestrator |
2026-04-06 06:41:57.868968 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-04-06 06:41:57.868980 | orchestrator | Monday 06 April 2026 06:40:59 +0000 (0:00:00.445) 0:01:13.219 **********
2026-04-06 06:41:57.868991 | orchestrator |
2026-04-06 06:41:57.869002 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] ****************************
2026-04-06 06:41:57.869013 | orchestrator | Monday 06 April 2026 06:41:00 +0000 (0:00:00.974) 0:01:14.194 **********
2026-04-06 06:41:57.869024 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:41:57.869035 | orchestrator | changed: [testbed-node-1]
2026-04-06 06:41:57.869046 | orchestrator | changed: [testbed-node-2]
2026-04-06 06:41:57.869059 | orchestrator |
2026-04-06 06:41:57.869071 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] **********************
2026-04-06 06:41:57.869084 | orchestrator | Monday 06 April 2026 06:41:13 +0000 (0:00:13.135) 0:01:27.329 **********
2026-04-06 06:41:57.869096 | orchestrator | changed: [testbed-node-2]
2026-04-06 06:41:57.869109 | orchestrator | changed: [testbed-node-1]
2026-04-06 06:41:57.869121 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:41:57.869134 | orchestrator |
2026-04-06 06:41:57.869146 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] ***********************
2026-04-06 06:41:57.869168 | orchestrator | Monday 06 April 2026 06:41:26 +0000 (0:00:12.788) 0:01:40.117 **********
2026-04-06 06:41:57.869187 | orchestrator | changed: [testbed-node-2]
2026-04-06 06:41:57.869206 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:41:57.869220 | orchestrator | changed: [testbed-node-1]
2026-04-06 06:41:57.869233 | orchestrator |
2026-04-06 06:41:57.869246 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] ***********************
2026-04-06 06:41:57.869259 | orchestrator | Monday 06 April 2026 06:41:39 +0000 (0:00:12.861) 0:01:52.979 **********
2026-04-06 06:41:57.869270 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:41:57.869281 | orchestrator | changed: [testbed-node-1]
2026-04-06 06:41:57.869292 | orchestrator | changed: [testbed-node-2]
2026-04-06 06:41:57.869303 | orchestrator |
2026-04-06 06:41:57.869314 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 06:41:57.869326 | orchestrator | testbed-node-0 : ok=16  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 06:41:57.869339 | orchestrator | testbed-node-1 : ok=15  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-06 06:41:57.869350 | orchestrator | testbed-node-2 : ok=15  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-06 06:41:57.869361 | orchestrator |
2026-04-06 06:41:57.869372 | orchestrator |
2026-04-06 06:41:57.869383 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 06:41:57.869394 | orchestrator | Monday 06 April 2026 06:41:57 +0000 (0:00:18.287) 0:02:11.266 **********
2026-04-06 06:41:57.869405 | orchestrator | ===============================================================================
2026-04-06 06:41:57.869416 | orchestrator | aodh : Restart aodh-notifier container --------------------------------- 18.29s
2026-04-06 06:41:57.869427 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 16.65s
2026-04-06 06:41:57.869438 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 13.13s
2026-04-06 06:41:57.869449 | orchestrator | aodh : Restart aodh-listener container --------------------------------- 12.86s
2026-04-06 06:41:57.869460 | orchestrator | aodh : Restart aodh-evaluator container -------------------------------- 12.79s
2026-04-06 06:41:57.869471 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 9.69s
2026-04-06 06:41:57.869482 | orchestrator | aodh : Copying over config.json files for services ---------------------- 6.47s
2026-04-06 06:41:57.869492 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.94s
2026-04-06 06:41:57.869509 | orchestrator | service-check-containers : aodh | Check containers ---------------------- 4.90s
2026-04-06 06:41:57.869520 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 3.92s
2026-04-06 06:41:57.869531 | orchestrator | aodh : include_tasks ---------------------------------------------------- 3.27s
2026-04-06 06:41:57.869542 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 2.94s
2026-04-06 06:41:57.869553 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS certificate --- 2.23s
2026-04-06 06:41:57.869587 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.15s
2026-04-06 06:41:57.869598 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.07s
2026-04-06 06:41:57.869609 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 2.04s
2026-04-06 06:41:57.869620 | orchestrator | aodh : Copying over existing policy file -------------------------------- 1.95s
2026-04-06 06:41:57.869631 | orchestrator | aodh : Flush handlers --------------------------------------------------- 1.86s
2026-04-06 06:41:57.869647 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.76s
2026-04-06 06:41:57.869665 | orchestrator | aodh : include_tasks ---------------------------------------------------- 1.76s
2026-04-06 06:41:58.079351 | orchestrator | ++ semver 10.0.0 7.0.0
2026-04-06 06:41:58.124003 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-06 06:41:58.124093 | orchestrator | + osism apply -a bootstrap octavia
2026-04-06 06:41:59.557630 | orchestrator | 2026-04-06 06:41:59 | INFO  | Prepare task for execution of octavia.
2026-04-06 06:41:59.623383 | orchestrator | 2026-04-06 06:41:59 | INFO  | Task e401c7d2-3267-48ef-a6d0-1fbcaca149f2 (octavia) was prepared for execution.
2026-04-06 06:41:59.623450 | orchestrator | 2026-04-06 06:41:59 | INFO  | It takes a moment until task e401c7d2-3267-48ef-a6d0-1fbcaca149f2 (octavia) has been started and output is visible here.
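The `++ semver 10.0.0 7.0.0` / `+ [[ 1 -ge 0 ]]` trace above is the upgrade script comparing the source release against a required minimum before deciding to run `osism apply -a bootstrap octavia`. A minimal sketch of that gate, assuming a dotted numeric version scheme; `compare_versions` is our stand-in for the job's external `semver` helper, not its actual implementation:

```python
# Stand-in for the `semver a b` helper seen in the trace: compare two dotted
# numeric versions and return -1, 0, or 1. The script proceeds with the
# bootstrap step only when the result is >= 0.
def compare_versions(a: str, b: str) -> int:
    """Return -1, 0, or 1 comparing dotted numeric versions a and b."""
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    # Python list comparison is lexicographic, matching semver precedence
    # for purely numeric major.minor.patch versions.
    return (pa > pb) - (pa < pb)

if compare_versions("10.0.0", "7.0.0") >= 0:
    print("would run: osism apply -a bootstrap octavia")
```

Note this sketch only covers numeric components; a real semver comparison also has to handle pre-release identifiers.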
2026-04-06 06:42:46.159991 | orchestrator |
2026-04-06 06:42:46.160111 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 06:42:46.160129 | orchestrator |
2026-04-06 06:42:46.160142 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 06:42:46.160154 | orchestrator | Monday 06 April 2026 06:42:04 +0000 (0:00:01.562) 0:00:01.562 **********
2026-04-06 06:42:46.160165 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:42:46.160178 | orchestrator | ok: [testbed-node-1]
2026-04-06 06:42:46.160189 | orchestrator | ok: [testbed-node-2]
2026-04-06 06:42:46.160200 | orchestrator |
2026-04-06 06:42:46.160211 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 06:42:46.160222 | orchestrator | Monday 06 April 2026 06:42:06 +0000 (0:00:01.773) 0:00:03.335 **********
2026-04-06 06:42:46.160233 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-04-06 06:42:46.160244 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-04-06 06:42:46.160255 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-04-06 06:42:46.160266 | orchestrator |
2026-04-06 06:42:46.160277 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-04-06 06:42:46.160288 | orchestrator |
2026-04-06 06:42:46.160299 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-06 06:42:46.160309 | orchestrator | Monday 06 April 2026 06:42:07 +0000 (0:00:01.522) 0:00:04.858 **********
2026-04-06 06:42:46.160321 | orchestrator | included: /ansible/roles/octavia/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 06:42:46.160332 | orchestrator |
2026-04-06 06:42:46.160343 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2026-04-06 06:42:46.160354 | orchestrator | Monday 06 April 2026 06:42:11 +0000 (0:00:03.430) 0:00:08.288 **********
2026-04-06 06:42:46.160365 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:42:46.160376 | orchestrator |
2026-04-06 06:42:46.160387 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-04-06 06:42:46.160397 | orchestrator | Monday 06 April 2026 06:42:14 +0000 (0:00:03.529) 0:00:11.818 **********
2026-04-06 06:42:46.160408 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:42:46.160438 | orchestrator |
2026-04-06 06:42:46.160461 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-04-06 06:42:46.160473 | orchestrator | Monday 06 April 2026 06:42:17 +0000 (0:00:03.148) 0:00:14.966 **********
2026-04-06 06:42:46.160484 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:42:46.160495 | orchestrator |
2026-04-06 06:42:46.160534 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-04-06 06:42:46.160548 | orchestrator | Monday 06 April 2026 06:42:21 +0000 (0:00:03.173) 0:00:18.140 **********
2026-04-06 06:42:46.160561 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:42:46.160574 | orchestrator |
2026-04-06 06:42:46.160587 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-04-06 06:42:46.160599 | orchestrator | Monday 06 April 2026 06:42:24 +0000 (0:00:03.488) 0:00:21.629 **********
2026-04-06 06:42:46.160613 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:42:46.160627 | orchestrator |
2026-04-06 06:42:46.160639 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 06:42:46.160652 | orchestrator | testbed-node-0 : ok=8  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 06:42:46.160693 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 06:42:46.160709 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-06 06:42:46.160721 | orchestrator |
2026-04-06 06:42:46.160733 | orchestrator |
2026-04-06 06:42:46.160746 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 06:42:46.160773 | orchestrator | Monday 06 April 2026 06:42:45 +0000 (0:00:21.187) 0:00:42.816 **********
2026-04-06 06:42:46.160786 | orchestrator | ===============================================================================
2026-04-06 06:42:46.160799 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.19s
2026-04-06 06:42:46.160811 | orchestrator | octavia : Creating Octavia database ------------------------------------- 3.53s
2026-04-06 06:42:46.160824 | orchestrator | octavia : Creating Octavia persistence database user and setting permissions --- 3.49s
2026-04-06 06:42:46.160837 | orchestrator | octavia : include_tasks ------------------------------------------------- 3.43s
2026-04-06 06:42:46.160850 | orchestrator | octavia : Creating Octavia database user and setting permissions -------- 3.17s
2026-04-06 06:42:46.160862 | orchestrator | octavia : Creating Octavia persistence database ------------------------- 3.15s
2026-04-06 06:42:46.160875 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.77s
2026-04-06 06:42:46.160888 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.52s
2026-04-06 06:42:46.353596 | orchestrator | + osism apply -a upgrade octavia
2026-04-06 06:42:47.659695 | orchestrator | 2026-04-06 06:42:47 | INFO  | Prepare task for execution of octavia.
2026-04-06 06:42:47.735969 | orchestrator | 2026-04-06 06:42:47 | INFO  | Task 35851a69-3a44-4640-8cf6-cc6fd30dbbd8 (octavia) was prepared for execution.
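Each service item dumped in these plays carries a `healthcheck` dict: `interval`, `retries`, `start_period`, and `timeout` as seconds-in-strings, plus a `CMD-SHELL` test such as `healthcheck_port octavia-worker 5672` or `healthcheck_curl http://…:9876`. A sketch, assuming plain Docker health-check semantics, of how such a dict could be rendered into `docker run` flags; the helper name `render_healthcheck_flags` is our own illustration, not part of kolla-ansible:

```python
# Render a kolla-style healthcheck dict (as logged in the service items above)
# into docker-run style health flags. Numeric values arrive as strings of
# seconds, so we append the "s" unit suffix Docker expects.
def render_healthcheck_flags(hc: dict) -> list[str]:
    test = hc["test"]  # e.g. ['CMD-SHELL', 'healthcheck_port octavia-worker 5672']
    if test[0] != "CMD-SHELL":
        raise ValueError("only CMD-SHELL tests handled in this sketch")
    return [
        f"--health-cmd={test[1]}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_port octavia-worker 5672"],
      "timeout": "30"}
print(render_healthcheck_flags(hc)[0])
# → --health-cmd=healthcheck_port octavia-worker 5672
```

The `healthcheck_port <service> <port>` tests in the log check that the named process holds a connection on that port (e.g. 5672 for RabbitMQ consumers, 3306 for database clients), which is why worker-style services probe broker or DB ports rather than their own listeners.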
2026-04-06 06:42:47.736061 | orchestrator | 2026-04-06 06:42:47 | INFO  | It takes a moment until task 35851a69-3a44-4640-8cf6-cc6fd30dbbd8 (octavia) has been started and output is visible here.
2026-04-06 06:43:26.617071 | orchestrator |
2026-04-06 06:43:26.617170 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 06:43:26.617183 | orchestrator |
2026-04-06 06:43:26.617192 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 06:43:26.617201 | orchestrator | Monday 06 April 2026 06:42:52 +0000 (0:00:01.533) 0:00:01.533 **********
2026-04-06 06:43:26.617209 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:43:26.617218 | orchestrator | ok: [testbed-node-1]
2026-04-06 06:43:26.617226 | orchestrator | ok: [testbed-node-2]
2026-04-06 06:43:26.617234 | orchestrator |
2026-04-06 06:43:26.617243 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 06:43:26.617251 | orchestrator | Monday 06 April 2026 06:42:54 +0000 (0:00:01.799) 0:00:03.333 **********
2026-04-06 06:43:26.617259 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-04-06 06:43:26.617267 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-04-06 06:43:26.617275 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-04-06 06:43:26.617283 | orchestrator |
2026-04-06 06:43:26.617291 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-04-06 06:43:26.617298 | orchestrator |
2026-04-06 06:43:26.617306 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-06 06:43:26.617314 | orchestrator | Monday 06 April 2026 06:42:56 +0000 (0:00:01.916) 0:00:05.250 **********
2026-04-06 06:43:26.617323 | orchestrator | included: /ansible/roles/octavia/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 06:43:26.617332 | orchestrator |
2026-04-06 06:43:26.617340 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-06 06:43:26.617348 | orchestrator | Monday 06 April 2026 06:42:59 +0000 (0:00:03.062) 0:00:08.312 **********
2026-04-06 06:43:26.617376 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 06:43:26.617385 | orchestrator |
2026-04-06 06:43:26.617393 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-04-06 06:43:26.617401 | orchestrator | Monday 06 April 2026 06:43:01 +0000 (0:00:02.021) 0:00:10.334 **********
2026-04-06 06:43:26.617409 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:43:26.617418 | orchestrator |
2026-04-06 06:43:26.617426 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-04-06 06:43:26.617434 | orchestrator | Monday 06 April 2026 06:43:06 +0000 (0:00:05.092) 0:00:15.426 **********
2026-04-06 06:43:26.617444 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:43:26.617553 | orchestrator |
2026-04-06 06:43:26.617565 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-04-06 06:43:26.617573 | orchestrator | Monday 06 April 2026 06:43:10 +0000 (0:00:04.131) 0:00:19.558 **********
2026-04-06 06:43:26.617581 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-04-06 06:43:26.617590 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-04-06 06:43:26.617598 | orchestrator |
2026-04-06 06:43:26.617606 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-04-06 06:43:26.617614 | orchestrator | Monday 06 April 2026 06:43:18 +0000 (0:00:08.031) 0:00:27.589 **********
2026-04-06 06:43:26.617624 | orchestrator | ok:
[testbed-node-0] 2026-04-06 06:43:26.617633 | orchestrator | 2026-04-06 06:43:26.617642 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-04-06 06:43:26.617651 | orchestrator | Monday 06 April 2026 06:43:23 +0000 (0:00:04.574) 0:00:32.164 ********** 2026-04-06 06:43:26.617660 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:43:26.617670 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:43:26.617679 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:43:26.617690 | orchestrator | 2026-04-06 06:43:26.617703 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-04-06 06:43:26.617718 | orchestrator | Monday 06 April 2026 06:43:24 +0000 (0:00:01.411) 0:00:33.576 ********** 2026-04-06 06:43:26.617751 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 06:43:26.617786 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-06 06:43:26.617799 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 06:43:26.617820 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 06:43:26.617831 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 06:43:26.617846 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-06 06:43:26.617857 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-06 06:43:26.617874 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 06:43:31.371998 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 06:43:31.372100 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 06:43:31.372117 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:43:31.372130 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 06:43:31.372159 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 06:43:31.372172 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:43:31.372226 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:43:31.372240 | orchestrator | 2026-04-06 06:43:31.372253 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-04-06 06:43:31.372266 | orchestrator | Monday 06 April 2026 06:43:28 
+0000 (0:00:03.767) 0:00:37.343 ********** 2026-04-06 06:43:31.372277 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:43:31.372290 | orchestrator | 2026-04-06 06:43:31.372301 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-04-06 06:43:31.372312 | orchestrator | Monday 06 April 2026 06:43:29 +0000 (0:00:01.127) 0:00:38.471 ********** 2026-04-06 06:43:31.372323 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:43:31.372333 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:43:31.372344 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:43:31.372355 | orchestrator | 2026-04-06 06:43:31.372366 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-04-06 06:43:31.372376 | orchestrator | Monday 06 April 2026 06:43:30 +0000 (0:00:01.360) 0:00:39.831 ********** 2026-04-06 06:43:31.372389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-06 06:43:31.372405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 06:43:31.372423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 06:43:31.372436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 06:43:31.372491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': 
{'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:43:36.013007 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:43:36.013120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-06 06:43:36.013143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 06:43:36.013157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 06:43:36.013186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 06:43:36.013221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:43:36.013234 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:43:36.013265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-06 06:43:36.013278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}})  2026-04-06 06:43:36.013290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 06:43:36.013306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 06:43:36.013318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:43:36.013337 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:43:36.013348 | orchestrator | 2026-04-06 06:43:36.013360 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-06 06:43:36.013373 | orchestrator | Monday 06 April 2026 06:43:32 +0000 (0:00:01.728) 0:00:41.559 ********** 2026-04-06 06:43:36.013384 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 06:43:36.013395 | orchestrator | 2026-04-06 06:43:36.013406 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-04-06 06:43:36.013417 | orchestrator | Monday 06 April 2026 06:43:34 +0000 (0:00:01.829) 0:00:43.389 ********** 2026-04-06 06:43:36.013437 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 06:43:39.326115 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 06:43:39.326222 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 06:43:39.326256 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-06 06:43:39.326293 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-06 06:43:39.326305 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-06 06:43:39.326334 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 06:43:39.326348 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 06:43:39.326360 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 06:43:39.326378 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 06:43:39.326399 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 06:43:39.326411 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 06:43:39.326423 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:43:39.326501 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:43:41.214641 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:43:41.215627 | orchestrator | 2026-04-06 06:43:41.215682 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-04-06 06:43:41.215695 | orchestrator | Monday 06 April 2026 06:43:40 +0000 
(0:00:06.052) 0:00:49.442 ********** 2026-04-06 06:43:41.215728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-06 06:43:41.215768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 06:43:41.215782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 06:43:41.215795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 06:43:41.215832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:43:41.215845 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:43:41.215857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-06 06:43:41.215882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 06:43:41.215895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 06:43:41.215906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 06:43:41.215932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:43:41.215943 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:43:41.215963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-06 06:43:42.831600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 06:43:42.831741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 06:43:42.831761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 06:43:42.831774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:43:42.831787 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:43:42.831801 | orchestrator | 2026-04-06 06:43:42.831813 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-04-06 06:43:42.831825 | orchestrator | Monday 06 April 2026 06:43:42 +0000 (0:00:01.737) 0:00:51.180 ********** 2026-04-06 06:43:42.831837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-06 06:43:42.831870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 06:43:42.831891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 06:43:42.831908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 06:43:42.831920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:43:42.831931 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:43:42.831943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-06 06:43:42.831955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 06:43:42.831974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 06:43:46.666111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 06:43:46.666260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:43:46.666286 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:43:46.666306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-06 06:43:46.666327 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 06:43:46.666341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 06:43:46.666417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 06:43:46.666496 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:43:46.666513 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:43:46.666528 | orchestrator | 2026-04-06 06:43:46.666549 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-04-06 06:43:46.666564 | orchestrator | Monday 06 April 2026 06:43:44 +0000 (0:00:01.831) 0:00:53.011 ********** 2026-04-06 06:43:46.666577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 06:43:46.666595 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 06:43:46.666610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 
06:43:46.666653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-06 06:43:57.147240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-06 06:43:57.147347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-06 06:43:57.147363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 06:43:57.147376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 06:43:57.147388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 06:43:57.147482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 06:43:57.147516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 06:43:57.147535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 06:43:57.147547 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:43:57.147560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:43:57.147572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:43:57.147591 | orchestrator | 2026-04-06 06:43:57.147604 | orchestrator | TASK 
[octavia : Copying over octavia-wsgi.conf] ******************************** 2026-04-06 06:43:57.147617 | orchestrator | Monday 06 April 2026 06:43:50 +0000 (0:00:06.440) 0:00:59.452 ********** 2026-04-06 06:43:57.147628 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-06 06:43:57.147640 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-06 06:43:57.147651 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-06 06:43:57.147662 | orchestrator | 2026-04-06 06:43:57.147673 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-04-06 06:43:57.147684 | orchestrator | Monday 06 April 2026 06:43:53 +0000 (0:00:02.730) 0:01:02.182 ********** 2026-04-06 06:43:57.147703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 06:44:10.780907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 06:44:10.781026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 06:44:10.781066 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-06 06:44:10.781081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-06 06:44:10.781092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-06 06:44:10.781122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 06:44:10.781142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 06:44:10.781154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 06:44:10.781167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 06:44:10.781186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 06:44:10.781198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 06:44:10.781209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:44:10.781233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:44:35.868054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:44:35.868168 | orchestrator | 2026-04-06 06:44:35.868187 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-04-06 06:44:35.868200 | 
orchestrator | Monday 06 April 2026 06:44:11 +0000 (0:00:18.578) 0:01:20.761 ********** 2026-04-06 06:44:35.868212 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:44:35.868223 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:44:35.868258 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:44:35.868270 | orchestrator | 2026-04-06 06:44:35.868281 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-04-06 06:44:35.868292 | orchestrator | Monday 06 April 2026 06:44:14 +0000 (0:00:02.790) 0:01:23.551 ********** 2026-04-06 06:44:35.868303 | orchestrator | ok: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-06 06:44:35.868314 | orchestrator | ok: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-06 06:44:35.868325 | orchestrator | ok: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-06 06:44:35.868336 | orchestrator | ok: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-06 06:44:35.868346 | orchestrator | ok: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-06 06:44:35.868357 | orchestrator | ok: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-06 06:44:35.868367 | orchestrator | ok: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-06 06:44:35.868378 | orchestrator | ok: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-06 06:44:35.868438 | orchestrator | ok: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-06 06:44:35.868449 | orchestrator | ok: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-06 06:44:35.868460 | orchestrator | ok: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-06 06:44:35.868471 | orchestrator | ok: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-06 06:44:35.868482 | orchestrator | 2026-04-06 06:44:35.868492 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-04-06 06:44:35.868504 | orchestrator | Monday 06 April 2026 06:44:20 +0000 (0:00:05.959) 0:01:29.510 ********** 
2026-04-06 06:44:35.868515 | orchestrator | ok: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-06 06:44:35.868526 | orchestrator | ok: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-06 06:44:35.868537 | orchestrator | ok: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-06 06:44:35.868548 | orchestrator | ok: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-06 06:44:35.868565 | orchestrator | ok: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-06 06:44:35.868584 | orchestrator | ok: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-06 06:44:35.868602 | orchestrator | ok: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-06 06:44:35.868620 | orchestrator | ok: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-06 06:44:35.868639 | orchestrator | ok: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-06 06:44:35.868657 | orchestrator | ok: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-06 06:44:35.868676 | orchestrator | ok: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-06 06:44:35.868694 | orchestrator | ok: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-06 06:44:35.868712 | orchestrator | 2026-04-06 06:44:35.868732 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-04-06 06:44:35.868752 | orchestrator | Monday 06 April 2026 06:44:26 +0000 (0:00:06.082) 0:01:35.593 ********** 2026-04-06 06:44:35.868771 | orchestrator | ok: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-06 06:44:35.868788 | orchestrator | ok: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-06 06:44:35.868808 | orchestrator | ok: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-06 06:44:35.868827 | orchestrator | ok: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-06 06:44:35.868848 | orchestrator | ok: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-06 06:44:35.868866 | orchestrator | ok: [testbed-node-2] => 
(item=client_ca.cert.pem) 2026-04-06 06:44:35.868885 | orchestrator | ok: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-06 06:44:35.868900 | orchestrator | ok: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-06 06:44:35.868913 | orchestrator | ok: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-06 06:44:35.868926 | orchestrator | ok: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-06 06:44:35.868938 | orchestrator | ok: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-06 06:44:35.868951 | orchestrator | ok: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-06 06:44:35.868975 | orchestrator | 2026-04-06 06:44:35.868986 | orchestrator | TASK [service-check-containers : octavia | Check containers] ******************* 2026-04-06 06:44:35.868997 | orchestrator | Monday 06 April 2026 06:44:33 +0000 (0:00:06.456) 0:01:42.050 ********** 2026-04-06 06:44:35.869044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 06:44:35.869064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 06:44:35.869077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-06 06:44:35.869089 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-06 06:44:35.869102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-06 06:44:35.869134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-06 06:44:41.358358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 06:44:41.359175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 06:44:41.359214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-06 06:44:41.359228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 06:44:41.359241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 06:44:41.359281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-06 06:44:41.359306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:44:41.359314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:44:41.359320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-06 06:44:41.359327 | orchestrator | 2026-04-06 06:44:41.359334 | orchestrator | TASK [service-check-containers : octavia | Notify handlers to restart containers] *** 2026-04-06 06:44:41.359341 | 
orchestrator | Monday 06 April 2026 06:44:39 +0000 (0:00:06.213) 0:01:48.263 ********** 2026-04-06 06:44:41.359350 | orchestrator | changed: [testbed-node-0] => { 2026-04-06 06:44:41.359360 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:44:41.359369 | orchestrator | } 2026-04-06 06:44:41.359406 | orchestrator | changed: [testbed-node-1] => { 2026-04-06 06:44:41.359416 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:44:41.359425 | orchestrator | } 2026-04-06 06:44:41.359434 | orchestrator | changed: [testbed-node-2] => { 2026-04-06 06:44:41.359444 | orchestrator |  "msg": "Notifying handlers" 2026-04-06 06:44:41.359452 | orchestrator | } 2026-04-06 06:44:41.359461 | orchestrator | 2026-04-06 06:44:41.359470 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-06 06:44:41.359479 | orchestrator | Monday 06 April 2026 06:44:40 +0000 (0:00:01.415) 0:01:49.678 ********** 2026-04-06 06:44:41.359490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-06 06:44:41.359515 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 06:44:41.359535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 06:44:41.583179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 06:44:41.583269 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:44:41.583279 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:44:41.583288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-06 06:44:41.583321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-06 06:44:41.583341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 06:44:41.583363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 06:44:41.583370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:44:41.583418 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:44:41.583427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-06 06:44:41.583441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}})  2026-04-06 06:44:41.583448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-06 06:44:41.583458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-06 06:44:41.583470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-06 06:46:11.206428 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:46:11.206593 | orchestrator | 2026-04-06 06:46:11.206612 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-04-06 06:46:11.206626 | orchestrator | Monday 06 April 2026 06:44:43 +0000 (0:00:02.353) 0:01:52.032 ********** 2026-04-06 06:46:11.206637 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:46:11.206648 | orchestrator | 2026-04-06 06:46:11.206660 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-06 06:46:11.206671 | orchestrator | Monday 06 April 2026 06:44:56 +0000 (0:00:13.125) 0:02:05.158 ********** 2026-04-06 06:46:11.206682 | orchestrator | 2026-04-06 06:46:11.206693 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-06 06:46:11.206704 | orchestrator | Monday 06 April 2026 06:44:56 +0000 (0:00:00.476) 0:02:05.634 ********** 2026-04-06 06:46:11.206714 | orchestrator | 2026-04-06 06:46:11.206725 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-04-06 06:46:11.206736 | orchestrator | Monday 06 April 2026 06:44:57 +0000 (0:00:00.469) 0:02:06.103 ********** 2026-04-06 06:46:11.206747 | orchestrator | 2026-04-06 06:46:11.206757 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-04-06 06:46:11.206768 | orchestrator | Monday 06 April 2026 06:44:58 +0000 (0:00:00.800) 0:02:06.904 ********** 2026-04-06 06:46:11.206779 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:46:11.206790 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:46:11.206825 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:46:11.206836 | orchestrator | 2026-04-06 06:46:11.206847 | orchestrator | RUNNING HANDLER [octavia : Restart 
octavia-driver-agent container] ************* 2026-04-06 06:46:11.206858 | orchestrator | Monday 06 April 2026 06:45:17 +0000 (0:00:19.276) 0:02:26.180 ********** 2026-04-06 06:46:11.206869 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:46:11.206880 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:46:11.206890 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:46:11.206904 | orchestrator | 2026-04-06 06:46:11.206917 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-04-06 06:46:11.206930 | orchestrator | Monday 06 April 2026 06:45:30 +0000 (0:00:13.649) 0:02:39.830 ********** 2026-04-06 06:46:11.206943 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:46:11.206955 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:46:11.206968 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:46:11.206981 | orchestrator | 2026-04-06 06:46:11.206993 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-04-06 06:46:11.207006 | orchestrator | Monday 06 April 2026 06:45:44 +0000 (0:00:13.055) 0:02:52.885 ********** 2026-04-06 06:46:11.207019 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:46:11.207031 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:46:11.207044 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:46:11.207058 | orchestrator | 2026-04-06 06:46:11.207070 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-04-06 06:46:11.207083 | orchestrator | Monday 06 April 2026 06:45:57 +0000 (0:00:13.312) 0:03:06.197 ********** 2026-04-06 06:46:11.207095 | orchestrator | changed: [testbed-node-2] 2026-04-06 06:46:11.207108 | orchestrator | changed: [testbed-node-1] 2026-04-06 06:46:11.207121 | orchestrator | changed: [testbed-node-0] 2026-04-06 06:46:11.207133 | orchestrator | 2026-04-06 06:46:11.207144 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-06 06:46:11.207156 | orchestrator | testbed-node-0 : ok=27  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-06 06:46:11.207167 | orchestrator | testbed-node-1 : ok=22  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-06 06:46:11.207178 | orchestrator | testbed-node-2 : ok=22  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-06 06:46:11.207189 | orchestrator | 2026-04-06 06:46:11.207200 | orchestrator | 2026-04-06 06:46:11.207211 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 06:46:11.207222 | orchestrator | Monday 06 April 2026 06:46:10 +0000 (0:00:13.425) 0:03:19.623 ********** 2026-04-06 06:46:11.207233 | orchestrator | =============================================================================== 2026-04-06 06:46:11.207243 | orchestrator | octavia : Restart octavia-api container -------------------------------- 19.28s 2026-04-06 06:46:11.207254 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 18.58s 2026-04-06 06:46:11.207265 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 13.65s 2026-04-06 06:46:11.207290 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 13.43s 2026-04-06 06:46:11.207351 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 13.31s 2026-04-06 06:46:11.207363 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 13.13s 2026-04-06 06:46:11.207373 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 13.06s 2026-04-06 06:46:11.207385 | orchestrator | octavia : Get security groups for octavia ------------------------------- 8.03s 2026-04-06 06:46:11.207396 | orchestrator | octavia : Copying certificate files for 
octavia-health-manager ---------- 6.46s 2026-04-06 06:46:11.207407 | orchestrator | octavia : Copying over config.json files for services ------------------- 6.44s 2026-04-06 06:46:11.207417 | orchestrator | service-check-containers : octavia | Check containers ------------------- 6.21s 2026-04-06 06:46:11.207437 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 6.08s 2026-04-06 06:46:11.207448 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 6.05s 2026-04-06 06:46:11.207458 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.96s 2026-04-06 06:46:11.207486 | orchestrator | octavia : Get amphora flavor info --------------------------------------- 5.09s 2026-04-06 06:46:11.207498 | orchestrator | octavia : Get loadbalancer management network --------------------------- 4.57s 2026-04-06 06:46:11.207513 | orchestrator | octavia : Get service project id ---------------------------------------- 4.13s 2026-04-06 06:46:11.207524 | orchestrator | octavia : Ensuring config directories exist ----------------------------- 3.77s 2026-04-06 06:46:11.207535 | orchestrator | octavia : include_tasks ------------------------------------------------- 3.06s 2026-04-06 06:46:11.207546 | orchestrator | octavia : Copying over Octavia SSH key ---------------------------------- 2.79s 2026-04-06 06:46:11.384404 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-04-06 06:46:11.384495 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/310-openstack-extended.sh 2026-04-06 06:46:12.766831 | orchestrator | 2026-04-06 06:46:12 | INFO  | Prepare task for execution of gnocchi. 2026-04-06 06:46:12.831295 | orchestrator | 2026-04-06 06:46:12 | INFO  | Task 6af40d9f-884e-4cc5-9efa-c0b8f8718ed0 (gnocchi) was prepared for execution. 
2026-04-06 06:46:12.831422 | orchestrator | 2026-04-06 06:46:12 | INFO  | It takes a moment until task 6af40d9f-884e-4cc5-9efa-c0b8f8718ed0 (gnocchi) has been started and output is visible here. 2026-04-06 06:46:25.118357 | orchestrator | 2026-04-06 06:46:25.118489 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 06:46:25.118516 | orchestrator | 2026-04-06 06:46:25.118538 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 06:46:25.118558 | orchestrator | Monday 06 April 2026 06:46:17 +0000 (0:00:01.512) 0:00:01.512 ********** 2026-04-06 06:46:25.118577 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:46:25.118598 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:46:25.118619 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:46:25.118639 | orchestrator | 2026-04-06 06:46:25.118659 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 06:46:25.118680 | orchestrator | Monday 06 April 2026 06:46:19 +0000 (0:00:01.812) 0:00:03.324 ********** 2026-04-06 06:46:25.118699 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False) 2026-04-06 06:46:25.118720 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False) 2026-04-06 06:46:25.118741 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False) 2026-04-06 06:46:25.118760 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True 2026-04-06 06:46:25.118800 | orchestrator | 2026-04-06 06:46:25.118820 | orchestrator | PLAY [Apply role gnocchi] ****************************************************** 2026-04-06 06:46:25.118842 | orchestrator | skipping: no hosts matched 2026-04-06 06:46:25.118862 | orchestrator | 2026-04-06 06:46:25.118883 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 06:46:25.118905 | orchestrator | 
testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 06:46:25.118956 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 06:46:25.118979 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-06 06:46:25.119000 | orchestrator | 2026-04-06 06:46:25.119021 | orchestrator | 2026-04-06 06:46:25.119042 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 06:46:25.119063 | orchestrator | Monday 06 April 2026 06:46:24 +0000 (0:00:05.344) 0:00:08.669 ********** 2026-04-06 06:46:25.119084 | orchestrator | =============================================================================== 2026-04-06 06:46:25.119135 | orchestrator | Group hosts based on enabled services ----------------------------------- 5.34s 2026-04-06 06:46:25.119154 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.81s 2026-04-06 06:46:26.693238 | orchestrator | 2026-04-06 06:46:26 | INFO  | Prepare task for execution of manila. 2026-04-06 06:46:26.760516 | orchestrator | 2026-04-06 06:46:26 | INFO  | Task d730673a-96e6-4788-b9ab-330405a37179 (manila) was prepared for execution. 2026-04-06 06:46:26.760593 | orchestrator | 2026-04-06 06:46:26 | INFO  | It takes a moment until task d730673a-96e6-4788-b9ab-330405a37179 (manila) has been started and output is visible here. 
2026-04-06 06:46:41.241004 | orchestrator | 2026-04-06 06:46:41.241141 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 06:46:41.241169 | orchestrator | 2026-04-06 06:46:41.241187 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 06:46:41.241205 | orchestrator | Monday 06 April 2026 06:46:31 +0000 (0:00:01.451) 0:00:01.451 ********** 2026-04-06 06:46:41.241223 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:46:41.241243 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:46:41.241261 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:46:41.241341 | orchestrator | 2026-04-06 06:46:41.241362 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 06:46:41.241394 | orchestrator | Monday 06 April 2026 06:46:33 +0000 (0:00:01.956) 0:00:03.407 ********** 2026-04-06 06:46:41.241406 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True) 2026-04-06 06:46:41.241428 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True) 2026-04-06 06:46:41.241440 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True) 2026-04-06 06:46:41.241451 | orchestrator | 2026-04-06 06:46:41.241461 | orchestrator | PLAY [Apply role manila] ******************************************************* 2026-04-06 06:46:41.241472 | orchestrator | 2026-04-06 06:46:41.241483 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-06 06:46:41.241494 | orchestrator | Monday 06 April 2026 06:46:35 +0000 (0:00:02.332) 0:00:05.740 ********** 2026-04-06 06:46:41.241505 | orchestrator | included: /ansible/roles/manila/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 06:46:41.241517 | orchestrator | 2026-04-06 06:46:41.241528 | orchestrator | TASK [manila : Ensuring config directories exist] ****************************** 2026-04-06 
06:46:41.241539 | orchestrator | Monday 06 April 2026 06:46:39 +0000 (0:00:03.254) 0:00:08.994 ********** 2026-04-06 06:46:41.241556 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:46:41.241576 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:46:41.241616 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:46:41.241659 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:46:41.241674 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:46:41.241688 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:46:41.241747 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-06 06:46:41.241772 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-06 06:46:41.241785 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-06 06:46:41.241814 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 06:46:58.920677 | orchestrator | ok: [testbed-node-1] => 
(item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 06:46:58.920814 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 06:46:58.920833 | orchestrator | 2026-04-06 06:46:58.920847 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-06 06:46:58.920860 | orchestrator | Monday 06 April 2026 06:46:42 +0000 (0:00:03.455) 0:00:12.450 ********** 2026-04-06 06:46:58.920874 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 06:46:58.920894 | orchestrator | 2026-04-06 06:46:58.920912 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] ************** 2026-04-06 06:46:58.920933 | orchestrator | Monday 06 April 2026 06:46:44 +0000 (0:00:01.859) 0:00:14.310 ********** 2026-04-06 
06:46:58.920977 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:46:58.920991 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:46:58.921002 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:46:58.921013 | orchestrator | 2026-04-06 06:46:58.921024 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] ********************* 2026-04-06 06:46:58.921035 | orchestrator | Monday 06 April 2026 06:46:46 +0000 (0:00:01.973) 0:00:16.284 ********** 2026-04-06 06:46:58.921047 | orchestrator | ok: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-06 06:46:58.921060 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-06 06:46:58.921071 | orchestrator | ok: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-06 06:46:58.921082 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-06 06:46:58.921093 | orchestrator | ok: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-06 06:46:58.921104 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-06 06:46:58.921115 | orchestrator | 2026-04-06 06:46:58.921126 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] ********************************* 2026-04-06 06:46:58.921141 | orchestrator | Monday 
06 April 2026 06:46:48 +0000 (0:00:02.500) 0:00:18.784 ********** 2026-04-06 06:46:58.921161 | orchestrator | ok: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-06 06:46:58.921180 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-06 06:46:58.921215 | orchestrator | ok: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-06 06:46:58.921229 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-06 06:46:58.921260 | orchestrator | ok: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-04-06 06:46:58.921554 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-04-06 06:46:58.921570 | orchestrator | 2026-04-06 06:46:58.921583 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] ***** 2026-04-06 06:46:58.921597 | orchestrator | Monday 06 April 2026 06:46:51 +0000 (0:00:02.370) 0:00:21.155 ********** 2026-04-06 06:46:58.921610 | orchestrator | ok: [testbed-node-0] => (item=manila-share) 2026-04-06 06:46:58.921622 | orchestrator | ok: [testbed-node-1] => (item=manila-share) 2026-04-06 06:46:58.921633 | orchestrator | ok: [testbed-node-2] => (item=manila-share) 2026-04-06 06:46:58.921644 | orchestrator | 2026-04-06 06:46:58.921655 | 
orchestrator | TASK [manila : Check if policies shall be overwritten] ************************* 2026-04-06 06:46:58.921665 | orchestrator | Monday 06 April 2026 06:46:53 +0000 (0:00:01.997) 0:00:23.153 ********** 2026-04-06 06:46:58.921676 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:46:58.921700 | orchestrator | 2026-04-06 06:46:58.921711 | orchestrator | TASK [manila : Set manila policy file] ***************************************** 2026-04-06 06:46:58.921722 | orchestrator | Monday 06 April 2026 06:46:54 +0000 (0:00:01.131) 0:00:24.284 ********** 2026-04-06 06:46:58.921733 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:46:58.921743 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:46:58.921754 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:46:58.921765 | orchestrator | 2026-04-06 06:46:58.921775 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-06 06:46:58.921786 | orchestrator | Monday 06 April 2026 06:46:55 +0000 (0:00:01.352) 0:00:25.636 ********** 2026-04-06 06:46:58.921797 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 06:46:58.921808 | orchestrator | 2026-04-06 06:46:58.921818 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] ********* 2026-04-06 06:46:58.921830 | orchestrator | Monday 06 April 2026 06:46:57 +0000 (0:00:01.864) 0:00:27.501 ********** 2026-04-06 06:46:58.921843 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:46:58.921857 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:46:58.921911 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:47:02.992076 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:02.992156 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:02.992165 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 
'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:02.992173 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:02.992182 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:02.992203 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:02.992225 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:02.992250 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:02.992257 | orchestrator | ok: 
[testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:02.992351 | orchestrator | 2026-04-06 06:47:02.992360 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-04-06 06:47:02.992368 | orchestrator | Monday 06 April 2026 06:47:02 +0000 (0:00:04.930) 0:00:32.431 ********** 2026-04-06 06:47:02.992377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:47:02.992390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:47:02.992405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:47:05.147310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 06:47:05.147416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:47:05.147439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 
06:47:05.147484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 06:47:05.147517 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:47:05.147556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:47:05.147622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 06:47:05.147644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 06:47:05.147656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 06:47:05.147666 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:47:05.147676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 06:47:05.147686 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:47:05.147696 | orchestrator | 2026-04-06 06:47:05.147708 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-04-06 06:47:05.147719 | orchestrator | Monday 06 April 2026 06:47:04 +0000 (0:00:02.130) 0:00:34.562 ********** 2026-04-06 06:47:05.147736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:47:05.147756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:47:05.147775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:47:08.339619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 
06:47:08.339741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:47:08.339780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:47:08.339817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:47:08.339829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 06:47:08.339841 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:47:08.339877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 06:47:08.339891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 
'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 06:47:08.339902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 06:47:08.339914 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:47:08.339926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 06:47:08.339945 | orchestrator | 
skipping: [testbed-node-2] 2026-04-06 06:47:08.339957 | orchestrator | 2026-04-06 06:47:08.339976 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-04-06 06:47:08.339998 | orchestrator | Monday 06 April 2026 06:47:06 +0000 (0:00:02.297) 0:00:36.859 ********** 2026-04-06 06:47:08.340019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:47:08.340051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:47:14.671609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:47:14.671704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 
06:47:14.671745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:14.671752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:14.671758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 
5672'], 'timeout': '30'}}}) 2026-04-06 06:47:14.671779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:14.671785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:14.671791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:14.671805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:14.671811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:14.671817 | orchestrator | 2026-04-06 06:47:14.671824 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-04-06 06:47:14.671831 | orchestrator | Monday 06 April 2026 06:47:12 +0000 (0:00:05.272) 0:00:42.131 ********** 2026-04-06 06:47:14.671836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:47:14.671847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:47:25.415398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:47:25.415551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:25.415571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 06:47:25.415584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:25.415595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 06:47:25.415625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:25.415646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 06:47:25.415663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:25.415676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-06 06:47:25.415687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-06 06:47:25.415699 | orchestrator |
2026-04-06 06:47:25.415712 | orchestrator | TASK [manila : Copying over manila-share.conf] *********************************
2026-04-06 06:47:25.415724 | orchestrator | Monday 06 April 2026 06:47:20 +0000 (0:00:07.857) 0:00:49.989 **********
2026-04-06 06:47:25.415736 | orchestrator | changed: [testbed-node-0] => (item=manila-share)
2026-04-06 06:47:25.415747 | orchestrator | changed: [testbed-node-1] => (item=manila-share)
2026-04-06 06:47:25.415758 | orchestrator | changed: [testbed-node-2] => (item=manila-share)
2026-04-06 06:47:25.415769 | orchestrator |
2026-04-06 06:47:25.415780 | orchestrator | TASK [manila : Copying over existing policy file] ******************************
2026-04-06 06:47:25.415790 | orchestrator | Monday 06 April 2026 06:47:24 +0000 (0:00:04.812) 0:00:54.801 **********
2026-04-06 06:47:25.415809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': 
True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:47:28.413870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:47:28.414006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 06:47:28.414086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 06:47:28.414101 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:47:28.414115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:47:28.414128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': 
{'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:47:28.414184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 06:47:28.414197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 06:47:28.414209 | orchestrator | skipping: 
[testbed-node-1] 2026-04-06 06:47:28.414226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:47:28.414238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:47:28.414359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 06:47:28.414373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 06:47:28.414395 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:47:28.414409 | orchestrator | 2026-04-06 06:47:28.414423 | orchestrator | TASK [service-check-containers : manila | Check containers] ******************** 2026-04-06 06:47:28.414437 | orchestrator | Monday 06 April 2026 06:47:27 +0000 (0:00:02.284) 0:00:57.085 ********** 2026-04-06 06:47:28.414461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:47:32.401339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:47:32.401473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:47:32.401503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:32.401550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:32.401571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:32.401615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:32.401696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:32.401719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 
'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:32.401739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:32.401771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-06 06:47:32.401792 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-06 06:47:32.401812 | orchestrator |
2026-04-06 06:47:32.401833 | orchestrator | TASK [service-check-containers : manila | Notify handlers to restart containers] ***
2026-04-06 06:47:32.401853 | orchestrator | Monday 06 April 2026 06:47:32 +0000 (0:00:04.932) 0:01:02.018 **********
2026-04-06 06:47:32.401873 | orchestrator | changed: [testbed-node-0] => {
2026-04-06 06:47:32.401893 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 06:47:32.401912 | orchestrator | }
2026-04-06 06:47:32.401932 | orchestrator | changed: [testbed-node-1] => {
2026-04-06 06:47:32.401951 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 06:47:32.401970 | orchestrator | }
2026-04-06 06:47:32.401988 | orchestrator | changed: [testbed-node-2] => {
2026-04-06 06:47:32.402090 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 06:47:34.277676 | orchestrator | }
2026-04-06 06:47:34.277778 | orchestrator |
2026-04-06 06:47:34.277793 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-06 06:47:34.277806 | orchestrator | Monday 06 April 2026 06:47:33 +0000 (0:00:01.408) 0:01:03.427 **********
2026-04-06 06:47:34.277839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:47:34.277855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:47:34.277890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 06:47:34.277909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-06 06:47:34.277929 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:47:34.277970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:47:34.277999 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:47:34.278080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-06 06:47:34.278107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': 
'30'}}})  2026-04-06 06:47:34.278119 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:47:34.278130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:47:34.278142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-06 06:47:34.278163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-06 06:51:13.032924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-06 06:51:13.033076 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:51:13.033096 | orchestrator |
2026-04-06 06:51:13.033109 | orchestrator | TASK [manila : Running Manila bootstrap container] *****************************
2026-04-06 06:51:13.033204 | orchestrator | Monday 06 April 2026 06:47:35 +0000 (0:00:02.427) 0:01:05.855 **********
2026-04-06 06:51:13.033250 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:51:13.033263 | orchestrator |
2026-04-06 06:51:13.033275 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-04-06 06:51:13.033285 | orchestrator | Monday 06 April 2026 06:47:54 +0000 (0:00:18.751) 0:01:24.606 **********
2026-04-06 06:51:13.033296 | orchestrator |
2026-04-06 06:51:13.033307 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-04-06 06:51:13.033318 | orchestrator | Monday 06 April 2026 06:47:55 +0000 (0:00:00.449) 0:01:25.055 **********
2026-04-06 06:51:13.033329 | orchestrator |
2026-04-06 06:51:13.033352 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-04-06 06:51:13.033363 | orchestrator | Monday 06 April 2026 06:47:55 +0000 (0:00:00.440) 0:01:25.496 **********
2026-04-06 06:51:13.033374 | orchestrator |
2026-04-06 06:51:13.033385 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************
2026-04-06 06:51:13.033396 | orchestrator | Monday 06 April 2026 06:47:56 +0000 (0:00:00.779) 0:01:26.275 **********
2026-04-06 06:51:13.033407 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:51:13.033418 | orchestrator | changed: [testbed-node-2]
2026-04-06 06:51:13.033429 | orchestrator | changed: [testbed-node-1]
2026-04-06 06:51:13.033439 | orchestrator |
2026-04-06 06:51:13.033450 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] ***********************
2026-04-06 06:51:13.033461 | orchestrator | Monday 06 April 2026 06:48:14 +0000 (0:00:17.790) 0:01:44.066 **********
2026-04-06 06:51:13.033472 | orchestrator | changed: [testbed-node-1]
2026-04-06 06:51:13.033483 | orchestrator | changed: [testbed-node-2]
2026-04-06 06:51:13.033497 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:51:13.033517 | orchestrator |
2026-04-06 06:51:13.033536 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ******************
2026-04-06 06:51:13.033554 | orchestrator | Monday 06 April 2026 06:48:37 +0000 (0:00:23.631) 0:02:07.699 **********
2026-04-06 06:51:13.033572 | orchestrator | changed: [testbed-node-1]
2026-04-06 06:51:13.033590 | orchestrator | changed: [testbed-node-2]
2026-04-06 06:51:13.033606 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:51:13.033624 | orchestrator |
2026-04-06 06:51:13.033643 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] **********************
2026-04-06 06:51:13.033662 | orchestrator | Monday 06 April 2026 06:48:51 +0000 (0:00:13.476) 0:02:21.175 **********
2026-04-06 06:51:13.033681 | orchestrator |
2026-04-06 06:51:13.033699 | orchestrator | STILL ALIVE [task 'manila : Restart manila-share container' is running] ********
2026-04-06 06:51:13.033717 | orchestrator | changed: [testbed-node-2]
2026-04-06 06:51:13.033734 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:51:13.033753 | orchestrator | changed: [testbed-node-1]
2026-04-06 06:51:13.033773 | orchestrator |
2026-04-06 06:51:13.033793 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 06:51:13.033806 | orchestrator | testbed-node-0 : ok=21  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 06:51:13.033818 | orchestrator | testbed-node-1 : ok=20  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-06 06:51:13.033829 | orchestrator | testbed-node-2 : ok=20  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-06 06:51:13.033840 | orchestrator |
2026-04-06 06:51:13.033851 | orchestrator |
2026-04-06 06:51:13.033862 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 06:51:13.033873 | orchestrator | Monday 06 April 2026 06:51:12 +0000 (0:02:21.349) 0:04:42.525 **********
2026-04-06 06:51:13.033884 | orchestrator | ===============================================================================
2026-04-06 06:51:13.033895 | orchestrator | manila : Restart manila-share container ------------------------------- 141.35s
2026-04-06 06:51:13.033906 | orchestrator | manila : Restart manila-data container --------------------------------- 23.63s
2026-04-06 06:51:13.033927 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 18.75s
2026-04-06 06:51:13.033938 | orchestrator | manila : Restart manila-api container ---------------------------------- 17.79s
2026-04-06 06:51:13.033949 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 13.48s
2026-04-06 06:51:13.033960 | orchestrator | manila : Copying over manila.conf --------------------------------------- 7.86s
2026-04-06 06:51:13.033971 | orchestrator | manila : Copying over config.json files for services -------------------- 5.27s
2026-04-06 06:51:13.033982 | orchestrator | service-check-containers : manila | Check containers -------------------- 4.93s
2026-04-06 06:51:13.033993 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.93s
2026-04-06 06:51:13.034090 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 4.81s
2026-04-06 06:51:13.034106 | orchestrator | manila : Ensuring config directories exist ------------------------------ 3.45s
2026-04-06 06:51:13.034117 | orchestrator | manila : include_tasks -------------------------------------------------- 3.26s
2026-04-06 06:51:13.034167 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 2.50s
2026-04-06 06:51:13.034178 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.43s
2026-04-06 06:51:13.034189 | orchestrator | manila : Copy over ceph Manila keyrings --------------------------------- 2.37s
2026-04-06 06:51:13.034200 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.33s
2026-04-06 06:51:13.034211 | orchestrator | service-cert-copy : manila | Copying over backend internal TLS key ------ 2.30s
2026-04-06 06:51:13.034222 | orchestrator | manila : Copying over existing policy file ------------------------------ 2.28s
2026-04-06 06:51:13.034233 | orchestrator | service-cert-copy : manila | Copying 
over backend internal TLS certificate --- 2.13s 2026-04-06 06:51:13.034245 | orchestrator | manila : Ensuring config directory has correct owner and permission ----- 2.00s 2026-04-06 06:51:13.251233 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-06 06:51:13.251314 | orchestrator | + osism migrate rabbitmq3to4 delete 2026-04-06 06:51:19.594762 | orchestrator | 2026-04-06 06:51:19 | ERROR  | Unable to get ansible vault password 2026-04-06 06:51:19.594869 | orchestrator | 2026-04-06 06:51:19 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-06 06:51:19.594887 | orchestrator | 2026-04-06 06:51:19 | ERROR  | Dropping encrypted entries 2026-04-06 06:51:19.628534 | orchestrator | 2026-04-06 06:51:19 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 2026-04-06 06:51:19.890387 | orchestrator | 2026-04-06 06:51:19 | INFO  | Found 126 classic queue(s) in vhost '/' 2026-04-06 06:51:19.947094 | orchestrator | 2026-04-06 06:51:19 | INFO  | Deleted queue: alarm.all.sample 2026-04-06 06:51:19.999924 | orchestrator | 2026-04-06 06:51:20 | INFO  | Deleted queue: alarming.sample 2026-04-06 06:51:20.084673 | orchestrator | 2026-04-06 06:51:20 | INFO  | Deleted queue: barbican.workers 2026-04-06 06:51:20.150050 | orchestrator | 2026-04-06 06:51:20 | INFO  | Deleted queue: barbican.workers.barbican.queue 2026-04-06 06:51:20.188596 | orchestrator | 2026-04-06 06:51:20 | INFO  | Deleted queue: barbican.workers_fanout_4b0ceb6861d44a2aaee96c7d9f96eea7 2026-04-06 06:51:20.231268 | orchestrator | 2026-04-06 06:51:20 | INFO  | Deleted queue: barbican.workers_fanout_d74bdd9e69484f5bb853704613a82935 2026-04-06 06:51:20.276458 | orchestrator | 2026-04-06 06:51:20 | INFO  | Deleted queue: barbican.workers_fanout_dba43a0bf60049bb98dc4e73ca687fce 2026-04-06 06:51:20.331929 | orchestrator | 2026-04-06 06:51:20 | INFO  | Deleted queue: barbican_notifications.info 
2026-04-06 06:51:20.381426 | orchestrator | 2026-04-06 06:51:20 | INFO  | Deleted queue: central 2026-04-06 06:51:20.418520 | orchestrator | 2026-04-06 06:51:20 | INFO  | Deleted queue: central.testbed-node-0 2026-04-06 06:51:20.468458 | orchestrator | 2026-04-06 06:51:20 | INFO  | Deleted queue: central.testbed-node-1 2026-04-06 06:51:20.520927 | orchestrator | 2026-04-06 06:51:20 | INFO  | Deleted queue: central.testbed-node-2 2026-04-06 06:51:20.547703 | orchestrator | 2026-04-06 06:51:20 | INFO  | Deleted queue: central_fanout_5e634e47b0254d269eab3c4ea5cece45 2026-04-06 06:51:20.587952 | orchestrator | 2026-04-06 06:51:20 | INFO  | Deleted queue: central_fanout_9ff81e7645b142ac8e3d2673833f9aab 2026-04-06 06:51:20.628859 | orchestrator | 2026-04-06 06:51:20 | INFO  | Deleted queue: central_fanout_b64802bbbf1e4125b54f6ab4b6d47267 2026-04-06 06:51:20.674569 | orchestrator | 2026-04-06 06:51:20 | INFO  | Deleted queue: central_fanout_f16898539dc8441d9b450f8cf58ee549 2026-04-06 06:51:20.725523 | orchestrator | 2026-04-06 06:51:20 | INFO  | Deleted queue: cinder-backup 2026-04-06 06:51:20.760961 | orchestrator | 2026-04-06 06:51:20 | INFO  | Deleted queue: cinder-backup.testbed-node-0 2026-04-06 06:51:20.796697 | orchestrator | 2026-04-06 06:51:20 | INFO  | Deleted queue: cinder-backup.testbed-node-1 2026-04-06 06:51:20.838741 | orchestrator | 2026-04-06 06:51:20 | INFO  | Deleted queue: cinder-backup.testbed-node-2 2026-04-06 06:51:20.884174 | orchestrator | 2026-04-06 06:51:20 | INFO  | Deleted queue: cinder-scheduler 2026-04-06 06:51:20.929926 | orchestrator | 2026-04-06 06:51:20 | INFO  | Deleted queue: cinder-scheduler.testbed-node-0 2026-04-06 06:51:20.965671 | orchestrator | 2026-04-06 06:51:20 | INFO  | Deleted queue: cinder-scheduler.testbed-node-1 2026-04-06 06:51:21.015214 | orchestrator | 2026-04-06 06:51:21 | INFO  | Deleted queue: cinder-scheduler.testbed-node-2 2026-04-06 06:51:21.067184 | orchestrator | 2026-04-06 06:51:21 | INFO  | Deleted queue: 
cinder-volume 2026-04-06 06:51:21.108724 | orchestrator | 2026-04-06 06:51:21 | INFO  | Deleted queue: cinder-volume.testbed-node-0@rbd-volumes 2026-04-06 06:51:21.162162 | orchestrator | 2026-04-06 06:51:21 | INFO  | Deleted queue: cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 2026-04-06 06:51:21.208466 | orchestrator | 2026-04-06 06:51:21 | INFO  | Deleted queue: cinder-volume.testbed-node-1@rbd-volumes 2026-04-06 06:51:21.254351 | orchestrator | 2026-04-06 06:51:21 | INFO  | Deleted queue: cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 2026-04-06 06:51:21.301230 | orchestrator | 2026-04-06 06:51:21 | INFO  | Deleted queue: cinder-volume.testbed-node-2@rbd-volumes 2026-04-06 06:51:21.351372 | orchestrator | 2026-04-06 06:51:21 | INFO  | Deleted queue: cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 2026-04-06 06:51:21.397043 | orchestrator | 2026-04-06 06:51:21 | INFO  | Deleted queue: compute 2026-04-06 06:51:21.443143 | orchestrator | 2026-04-06 06:51:21 | INFO  | Deleted queue: compute.testbed-node-3 2026-04-06 06:51:21.489386 | orchestrator | 2026-04-06 06:51:21 | INFO  | Deleted queue: compute.testbed-node-4 2026-04-06 06:51:21.538720 | orchestrator | 2026-04-06 06:51:21 | INFO  | Deleted queue: compute.testbed-node-5 2026-04-06 06:51:21.580237 | orchestrator | 2026-04-06 06:51:21 | INFO  | Deleted queue: conductor 2026-04-06 06:51:21.625052 | orchestrator | 2026-04-06 06:51:21 | INFO  | Deleted queue: conductor.testbed-node-0 2026-04-06 06:51:21.670506 | orchestrator | 2026-04-06 06:51:21 | INFO  | Deleted queue: conductor.testbed-node-1 2026-04-06 06:51:21.721538 | orchestrator | 2026-04-06 06:51:21 | INFO  | Deleted queue: conductor.testbed-node-2 2026-04-06 06:51:21.772719 | orchestrator | 2026-04-06 06:51:21 | INFO  | Deleted queue: event.sample 2026-04-06 06:51:21.807995 | orchestrator | 2026-04-06 06:51:21 | INFO  | Closed connection: 192.168.16.12:42866 -> 192.168.16.11:5672 2026-04-06 06:51:21.824190 | orchestrator | 
2026-04-06 06:51:21 | INFO  | Closed connection: 192.168.16.12:46196 -> 192.168.16.10:5672 2026-04-06 06:51:21.837408 | orchestrator | 2026-04-06 06:51:21 | INFO  | Closed connection: 192.168.16.11:42168 -> 192.168.16.10:5672 2026-04-06 06:51:21.856698 | orchestrator | 2026-04-06 06:51:21 | INFO  | Closed connection: 192.168.16.11:56558 -> 192.168.16.11:5672 2026-04-06 06:51:21.877367 | orchestrator | 2026-04-06 06:51:21 | INFO  | Closed connection: 192.168.16.11:56536 -> 192.168.16.11:5672 2026-04-06 06:51:21.892950 | orchestrator | 2026-04-06 06:51:21 | INFO  | Closed connection: 192.168.16.10:42740 -> 192.168.16.10:5672 2026-04-06 06:51:21.908784 | orchestrator | 2026-04-06 06:51:21 | INFO  | Closed connection: 192.168.16.10:42756 -> 192.168.16.10:5672 2026-04-06 06:51:21.922172 | orchestrator | 2026-04-06 06:51:21 | INFO  | Closed connection: 192.168.16.10:35250 -> 192.168.16.11:5672 2026-04-06 06:51:21.938319 | orchestrator | 2026-04-06 06:51:21 | INFO  | Closed connection: 192.168.16.12:46200 -> 192.168.16.10:5672 2026-04-06 06:51:21.939302 | orchestrator | 2026-04-06 06:51:21 | INFO  | Closed 9 connection(s) for queue: magnum-conductor 2026-04-06 06:51:21.971581 | orchestrator | 2026-04-06 06:51:21 | INFO  | Deleted queue: magnum-conductor 2026-04-06 06:51:22.020084 | orchestrator | 2026-04-06 06:51:22 | INFO  | Deleted queue: magnum-conductor.cdopvc2wtmwa 2026-04-06 06:51:22.070568 | orchestrator | 2026-04-06 06:51:22 | INFO  | Deleted queue: magnum-conductor.hyvpxyptbmqb 2026-04-06 06:51:22.119878 | orchestrator | 2026-04-06 06:51:22 | INFO  | Deleted queue: magnum-conductor.zymsmiuafytu 2026-04-06 06:51:22.152613 | orchestrator | 2026-04-06 06:51:22 | INFO  | Deleted queue: magnum-conductor_fanout_47cc7db69c0a44e6b4a557a70b3ff099 2026-04-06 06:51:22.187472 | orchestrator | 2026-04-06 06:51:22 | INFO  | Deleted queue: magnum-conductor_fanout_4c054414bcc54bd48f9686b389903201 2026-04-06 06:51:22.225878 | orchestrator | 2026-04-06 06:51:22 | INFO  | Deleted 
queue: magnum-conductor_fanout_9f11861ca63f4951abc14e0976135662 2026-04-06 06:51:22.276372 | orchestrator | 2026-04-06 06:51:22 | INFO  | Deleted queue: magnum-conductor_fanout_a9bda3b9ea7648e69bf55296f8ba80dd 2026-04-06 06:51:22.317263 | orchestrator | 2026-04-06 06:51:22 | INFO  | Deleted queue: magnum-conductor_fanout_b116af4aa976406289281abc1c25e974 2026-04-06 06:51:22.359256 | orchestrator | 2026-04-06 06:51:22 | INFO  | Deleted queue: magnum-conductor_fanout_b7a82fba044d44ca86922e21db8f08e1 2026-04-06 06:51:22.397753 | orchestrator | 2026-04-06 06:51:22 | INFO  | Deleted queue: magnum-conductor_fanout_d11d8229005a4a02a89bb0d3b335f827 2026-04-06 06:51:22.439245 | orchestrator | 2026-04-06 06:51:22 | INFO  | Deleted queue: magnum-conductor_fanout_d62fe757ae6148479fcad4fcbc3dd7d2 2026-04-06 06:51:22.483465 | orchestrator | 2026-04-06 06:51:22 | INFO  | Deleted queue: magnum-conductor_fanout_f75dacc656e14a6fa8f33dd0802dafc6 2026-04-06 06:51:22.527710 | orchestrator | 2026-04-06 06:51:22 | INFO  | Deleted queue: manila-data 2026-04-06 06:51:22.574226 | orchestrator | 2026-04-06 06:51:22 | INFO  | Deleted queue: manila-data.testbed-node-0 2026-04-06 06:51:22.632198 | orchestrator | 2026-04-06 06:51:22 | INFO  | Deleted queue: manila-data.testbed-node-1 2026-04-06 06:51:22.679212 | orchestrator | 2026-04-06 06:51:22 | INFO  | Deleted queue: manila-data.testbed-node-2 2026-04-06 06:51:22.725855 | orchestrator | 2026-04-06 06:51:22 | INFO  | Deleted queue: manila-scheduler 2026-04-06 06:51:22.767550 | orchestrator | 2026-04-06 06:51:22 | INFO  | Deleted queue: manila-scheduler.testbed-node-0 2026-04-06 06:51:22.805397 | orchestrator | 2026-04-06 06:51:22 | INFO  | Deleted queue: manila-scheduler.testbed-node-1 2026-04-06 06:51:22.849701 | orchestrator | 2026-04-06 06:51:22 | INFO  | Deleted queue: manila-scheduler.testbed-node-2 2026-04-06 06:51:22.898424 | orchestrator | 2026-04-06 06:51:22 | INFO  | Deleted queue: manila-share 2026-04-06 06:51:22.936418 | 
orchestrator | 2026-04-06 06:51:22 | INFO  | Deleted queue: manila-share.testbed-node-0@cephfsnative1 2026-04-06 06:51:22.989185 | orchestrator | 2026-04-06 06:51:22 | INFO  | Deleted queue: manila-share.testbed-node-1@cephfsnative1 2026-04-06 06:51:23.033645 | orchestrator | 2026-04-06 06:51:23 | INFO  | Deleted queue: manila-share.testbed-node-2@cephfsnative1 2026-04-06 06:51:23.074967 | orchestrator | 2026-04-06 06:51:23 | INFO  | Deleted queue: manila-share_fanout_02e138ef57dc4cad844f197973ad801a 2026-04-06 06:51:23.104806 | orchestrator | 2026-04-06 06:51:23 | INFO  | Deleted queue: manila-share_fanout_0451adc48901463d9f5c57d59d15d785 2026-04-06 06:51:23.140592 | orchestrator | 2026-04-06 06:51:23 | INFO  | Deleted queue: manila-share_fanout_58f7c1b614e644a89852e82d0a3f93de 2026-04-06 06:51:23.294583 | orchestrator | 2026-04-06 06:51:23 | INFO  | Deleted queue: notifications.audit 2026-04-06 06:51:23.462363 | orchestrator | 2026-04-06 06:51:23 | INFO  | Deleted queue: notifications.critical 2026-04-06 06:51:23.619693 | orchestrator | 2026-04-06 06:51:23 | INFO  | Deleted queue: notifications.debug 2026-04-06 06:51:23.794733 | orchestrator | 2026-04-06 06:51:23 | INFO  | Deleted queue: notifications.error 2026-04-06 06:51:23.947352 | orchestrator | 2026-04-06 06:51:23 | INFO  | Deleted queue: notifications.info 2026-04-06 06:51:24.089970 | orchestrator | 2026-04-06 06:51:24 | INFO  | Deleted queue: notifications.sample 2026-04-06 06:51:24.262738 | orchestrator | 2026-04-06 06:51:24 | INFO  | Deleted queue: notifications.warn 2026-04-06 06:51:24.306389 | orchestrator | 2026-04-06 06:51:24 | INFO  | Deleted queue: octavia_provisioning_v2 2026-04-06 06:51:24.360830 | orchestrator | 2026-04-06 06:51:24 | INFO  | Deleted queue: octavia_provisioning_v2.testbed-node-0 2026-04-06 06:51:24.406412 | orchestrator | 2026-04-06 06:51:24 | INFO  | Deleted queue: octavia_provisioning_v2.testbed-node-1 2026-04-06 06:51:24.452747 | orchestrator | 2026-04-06 06:51:24 | INFO  | 
Deleted queue: octavia_provisioning_v2.testbed-node-2 2026-04-06 06:51:24.506966 | orchestrator | 2026-04-06 06:51:24 | INFO  | Deleted queue: producer 2026-04-06 06:51:24.547501 | orchestrator | 2026-04-06 06:51:24 | INFO  | Deleted queue: producer.testbed-node-0 2026-04-06 06:51:24.600807 | orchestrator | 2026-04-06 06:51:24 | INFO  | Deleted queue: producer.testbed-node-1 2026-04-06 06:51:24.649058 | orchestrator | 2026-04-06 06:51:24 | INFO  | Deleted queue: producer.testbed-node-2 2026-04-06 06:51:24.693628 | orchestrator | 2026-04-06 06:51:24 | INFO  | Deleted queue: producer_fanout_1e1ec29c93dd429fbfce14530190b826 2026-04-06 06:51:24.735637 | orchestrator | 2026-04-06 06:51:24 | INFO  | Deleted queue: producer_fanout_1e93b44812e64817852c760c4f6c519e 2026-04-06 06:51:24.770659 | orchestrator | 2026-04-06 06:51:24 | INFO  | Deleted queue: producer_fanout_3e2654ca7dda41339d8beafd12c64f00 2026-04-06 06:51:24.803795 | orchestrator | 2026-04-06 06:51:24 | INFO  | Deleted queue: producer_fanout_af2b883e305444cdab9a6d2c2efbafe7 2026-04-06 06:51:24.837195 | orchestrator | 2026-04-06 06:51:24 | INFO  | Deleted queue: producer_fanout_e814c57ff9484833b35a43b7e566cc83 2026-04-06 06:51:24.879260 | orchestrator | 2026-04-06 06:51:24 | INFO  | Deleted queue: producer_fanout_f01beabb3aed46b48c9a55ce4162c5b3 2026-04-06 06:51:24.914451 | orchestrator | 2026-04-06 06:51:24 | INFO  | Deleted queue: q-plugin 2026-04-06 06:51:24.953476 | orchestrator | 2026-04-06 06:51:24 | INFO  | Deleted queue: q-plugin.testbed-node-0 2026-04-06 06:51:24.995181 | orchestrator | 2026-04-06 06:51:24 | INFO  | Deleted queue: q-plugin.testbed-node-1 2026-04-06 06:51:25.050361 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: q-plugin.testbed-node-2 2026-04-06 06:51:25.095254 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: q-reports-plugin 2026-04-06 06:51:25.134090 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: q-reports-plugin.testbed-node-0 2026-04-06 
06:51:25.176367 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: q-reports-plugin.testbed-node-1 2026-04-06 06:51:25.220057 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: q-reports-plugin.testbed-node-2 2026-04-06 06:51:25.259902 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: q-server-resource-versions 2026-04-06 06:51:25.293283 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: q-server-resource-versions.testbed-node-0 2026-04-06 06:51:25.330845 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: q-server-resource-versions.testbed-node-1 2026-04-06 06:51:25.369155 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: q-server-resource-versions.testbed-node-2 2026-04-06 06:51:25.404547 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: reply_008afd28d9664b5c85ff7a1cd9b8cbe5 2026-04-06 06:51:25.438627 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: reply_15147918694740b5b003bf2ef0d26dcd 2026-04-06 06:51:25.478579 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: reply_45aa96e5a36e42d994ca48f8f496d790 2026-04-06 06:51:25.519250 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: reply_540a7fd733ff43f8b958d50b65d7e39f 2026-04-06 06:51:25.556311 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: reply_67d9ebc349b3442b99141479d75b1616 2026-04-06 06:51:25.585847 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: reply_9ea6d5684f93484cb7e337a9a581af75 2026-04-06 06:51:25.619309 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: reply_ad85ce300141453d8085f878f1d10105 2026-04-06 06:51:25.654686 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: reply_d187808885d44e0e9d1c8850bcfc560f 2026-04-06 06:51:25.686225 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: reply_d8374e2e5e0b456888092b7938713472 2026-04-06 06:51:25.717966 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: 
reply_d84ff637bd9444b595ecf1e39baa5bbc 2026-04-06 06:51:25.754318 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: reply_f2a4c05ee22e4919b52fdefeb00ec7d5 2026-04-06 06:51:25.790748 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: scheduler 2026-04-06 06:51:25.833322 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: scheduler.testbed-node-0 2026-04-06 06:51:25.875988 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: scheduler.testbed-node-1 2026-04-06 06:51:25.912767 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: scheduler.testbed-node-2 2026-04-06 06:51:25.960920 | orchestrator | 2026-04-06 06:51:25 | INFO  | Deleted queue: worker 2026-04-06 06:51:25.999811 | orchestrator | 2026-04-06 06:51:26 | INFO  | Deleted queue: worker.testbed-node-0 2026-04-06 06:51:26.037433 | orchestrator | 2026-04-06 06:51:26 | INFO  | Deleted queue: worker.testbed-node-1 2026-04-06 06:51:26.079682 | orchestrator | 2026-04-06 06:51:26 | INFO  | Deleted queue: worker.testbed-node-2 2026-04-06 06:51:26.120906 | orchestrator | 2026-04-06 06:51:26 | INFO  | Deleted queue: worker_fanout_57e7a5b17c3944c2941c1cc9b14e9b69 2026-04-06 06:51:26.158012 | orchestrator | 2026-04-06 06:51:26 | INFO  | Deleted queue: worker_fanout_5c8e381e9869471f892d3586dfd420c6 2026-04-06 06:51:26.195355 | orchestrator | 2026-04-06 06:51:26 | INFO  | Deleted queue: worker_fanout_6c534516e00d43b397b901c97429e317 2026-04-06 06:51:26.231697 | orchestrator | 2026-04-06 06:51:26 | INFO  | Deleted queue: worker_fanout_975bcb0093644f58aed043de5f7d79cd 2026-04-06 06:51:26.271284 | orchestrator | 2026-04-06 06:51:26 | INFO  | Deleted queue: worker_fanout_9d0bb9a36fae46939faca2ce4f5c677d 2026-04-06 06:51:26.311209 | orchestrator | 2026-04-06 06:51:26 | INFO  | Deleted queue: worker_fanout_f9759c7e8268430dafef6561ce71a1fe 2026-04-06 06:51:26.311302 | orchestrator | 2026-04-06 06:51:26 | INFO  | Successfully deleted 126 queue(s) in vhost '/' 2026-04-06 
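The delete step above walks the classic queues in a vhost and removes them one by one through the RabbitMQ Management API. A minimal sketch of that pattern, assuming the standard Management HTTP endpoints (`GET /api/queues/<vhost>`, `DELETE /api/queues/<vhost>/<name>`); the helper names and the sample payload are illustrative, not taken from the osism tool itself:

```python
# Hypothetical sketch of the classic-queue cleanup pattern seen in the log.
# Assumes the RabbitMQ Management HTTP API; function names are illustrative.
import urllib.parse
import urllib.request


def classic_queues(queues):
    """Filter a /api/queues payload down to classic (non-quorum) queue names."""
    return [q["name"] for q in queues if q.get("type", "classic") == "classic"]


def delete_queue(base_url, vhost, name):
    """Issue DELETE /api/queues/<vhost>/<name> against the management API."""
    url = "{}/api/queues/{}/{}".format(
        base_url,
        urllib.parse.quote(vhost, safe=""),   # '/' vhost must be percent-encoded
        urllib.parse.quote(name, safe=""),
    )
    req = urllib.request.Request(url, method="DELETE")
    return urllib.request.urlopen(req)  # raises urllib.error.HTTPError on failure


# Payload shaped like a management API response: one classic, one quorum queue.
sample = [
    {"name": "alarm.all.sample", "type": "classic"},
    {"name": "conductor", "type": "quorum"},
]
print(classic_queues(sample))  # -> ['alarm.all.sample']
```

Only the classic queues are deleted; quorum queues (listed later with `--quorum`) are the migration target and are left in place. A real run would also need authentication (the log shows the tool connecting as `openstack`) and, as seen for `magnum-conductor`, closing any open consumer connections before the delete succeeds.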
06:51:26.618393 | orchestrator | + osism migrate rabbitmq3to4 list 2026-04-06 06:51:32.852250 | orchestrator | 2026-04-06 06:51:32 | ERROR  | Unable to get ansible vault password 2026-04-06 06:51:32.852358 | orchestrator | 2026-04-06 06:51:32 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-06 06:51:32.852630 | orchestrator | 2026-04-06 06:51:32 | ERROR  | Dropping encrypted entries 2026-04-06 06:51:32.889992 | orchestrator | 2026-04-06 06:51:32 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 2026-04-06 06:51:33.083229 | orchestrator | 2026-04-06 06:51:33 | INFO  | Found 13 classic queue(s) in vhost '/': 2026-04-06 06:51:33.083319 | orchestrator | 2026-04-06 06:51:33 | INFO  |  - magnum-conductor (vhost: /, messages: 0) 2026-04-06 06:51:33.083332 | orchestrator | 2026-04-06 06:51:33 | INFO  |  - magnum-conductor.cdopvc2wtmwa (vhost: /, messages: 0) 2026-04-06 06:51:33.083340 | orchestrator | 2026-04-06 06:51:33 | INFO  |  - magnum-conductor.hyvpxyptbmqb (vhost: /, messages: 0) 2026-04-06 06:51:33.083347 | orchestrator | 2026-04-06 06:51:33 | INFO  |  - magnum-conductor.zymsmiuafytu (vhost: /, messages: 0) 2026-04-06 06:51:33.083428 | orchestrator | 2026-04-06 06:51:33 | INFO  |  - magnum-conductor_fanout_47cc7db69c0a44e6b4a557a70b3ff099 (vhost: /, messages: 0) 2026-04-06 06:51:33.083442 | orchestrator | 2026-04-06 06:51:33 | INFO  |  - magnum-conductor_fanout_4c054414bcc54bd48f9686b389903201 (vhost: /, messages: 0) 2026-04-06 06:51:33.083523 | orchestrator | 2026-04-06 06:51:33 | INFO  |  - magnum-conductor_fanout_9f11861ca63f4951abc14e0976135662 (vhost: /, messages: 0) 2026-04-06 06:51:33.083876 | orchestrator | 2026-04-06 06:51:33 | INFO  |  - magnum-conductor_fanout_a9bda3b9ea7648e69bf55296f8ba80dd (vhost: /, messages: 0) 2026-04-06 06:51:33.083901 | orchestrator | 2026-04-06 06:51:33 | INFO  |  - 
magnum-conductor_fanout_b116af4aa976406289281abc1c25e974 (vhost: /, messages: 0) 2026-04-06 06:51:33.083909 | orchestrator | 2026-04-06 06:51:33 | INFO  |  - magnum-conductor_fanout_b7a82fba044d44ca86922e21db8f08e1 (vhost: /, messages: 0) 2026-04-06 06:51:33.083951 | orchestrator | 2026-04-06 06:51:33 | INFO  |  - magnum-conductor_fanout_d11d8229005a4a02a89bb0d3b335f827 (vhost: /, messages: 0) 2026-04-06 06:51:33.084039 | orchestrator | 2026-04-06 06:51:33 | INFO  |  - magnum-conductor_fanout_d62fe757ae6148479fcad4fcbc3dd7d2 (vhost: /, messages: 0) 2026-04-06 06:51:33.084142 | orchestrator | 2026-04-06 06:51:33 | INFO  |  - magnum-conductor_fanout_f75dacc656e14a6fa8f33dd0802dafc6 (vhost: /, messages: 0) 2026-04-06 06:51:33.407803 | orchestrator | + osism migrate rabbitmq3to4 list --vhost openstack --quorum 2026-04-06 06:51:39.632680 | orchestrator | 2026-04-06 06:51:39 | ERROR  | Unable to get ansible vault password 2026-04-06 06:51:39.632770 | orchestrator | 2026-04-06 06:51:39 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-06 06:51:39.632782 | orchestrator | 2026-04-06 06:51:39 | ERROR  | Dropping encrypted entries 2026-04-06 06:51:39.667626 | orchestrator | 2026-04-06 06:51:39 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 
2026-04-06 06:51:39.858638 | orchestrator | 2026-04-06 06:51:39 | INFO  | Found 192 quorum queue(s) in vhost 'openstack': 2026-04-06 06:51:39.858776 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - alarm.all.sample (vhost: openstack, messages: 0) 2026-04-06 06:51:39.858791 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - alarming.sample (vhost: openstack, messages: 0) 2026-04-06 06:51:39.858803 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - barbican.workers (vhost: openstack, messages: 0) 2026-04-06 06:51:39.858887 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - barbican.workers.barbican.queue (vhost: openstack, messages: 0) 2026-04-06 06:51:39.858937 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - barbican.workers_fanout_testbed-node-0:barbican-worker:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.858953 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - barbican.workers_fanout_testbed-node-1:barbican-worker:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.859015 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - barbican.workers_fanout_testbed-node-2:barbican-worker:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.859029 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - barbican_notifications.info (vhost: openstack, messages: 0) 2026-04-06 06:51:39.859358 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - central (vhost: openstack, messages: 0) 2026-04-06 06:51:39.859381 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - central.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.859411 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - central.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.859905 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - central.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.859926 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - central_fanout_testbed-node-0:designate-central:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.859939 | orchestrator | 
2026-04-06 06:51:39 | INFO  |  - central_fanout_testbed-node-0:designate-central:2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.861170 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - central_fanout_testbed-node-1:designate-central:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.861191 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - central_fanout_testbed-node-1:designate-central:2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.861203 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - central_fanout_testbed-node-2:designate-central:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.861214 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - central_fanout_testbed-node-2:designate-central:2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.861225 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-backup (vhost: openstack, messages: 0) 2026-04-06 06:51:39.861236 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-backup.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.861247 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-backup.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.861286 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-backup.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.861298 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-backup_fanout_testbed-node-0:cinder-backup:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.861344 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-backup_fanout_testbed-node-1:cinder-backup:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.861410 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-backup_fanout_testbed-node-2:cinder-backup:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.861424 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-scheduler (vhost: openstack, messages: 0) 2026-04-06 06:51:39.861435 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - 
cinder-scheduler.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.861649 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.861849 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.861869 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-scheduler_fanout_testbed-node-0:cinder-scheduler:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.862128 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-scheduler_fanout_testbed-node-1:cinder-scheduler:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.862617 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-scheduler_fanout_testbed-node-2:cinder-scheduler:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.862705 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-volume (vhost: openstack, messages: 0) 2026-04-06 06:51:39.862716 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: openstack, messages: 0) 2026-04-06 06:51:39.862781 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.862855 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_testbed-node-0:cinder-volume:2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.862867 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: openstack, messages: 0) 2026-04-06 06:51:39.862874 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.862930 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_testbed-node-1:cinder-volume:2 (vhost: openstack, messages: 0) 2026-04-06 
06:51:39.863046 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: openstack, messages: 0) 2026-04-06 06:51:39.863280 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.863304 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes_fanout_testbed-node-2:cinder-volume:2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.863352 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-volume_fanout_testbed-node-0:cinder-volume:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.863582 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-volume_fanout_testbed-node-1:cinder-volume:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.863766 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - cinder-volume_fanout_testbed-node-2:cinder-volume:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.863791 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - compute (vhost: openstack, messages: 0) 2026-04-06 06:51:39.863881 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - compute.testbed-node-3 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.863891 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - compute.testbed-node-4 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.864025 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - compute.testbed-node-5 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.864183 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - compute_fanout_testbed-node-3:nova-compute:2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.864423 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - compute_fanout_testbed-node-4:nova-compute:2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.864434 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - compute_fanout_testbed-node-5:nova-compute:2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.865632 | orchestrator | 
2026-04-06 06:51:39 | INFO  |  - conductor (vhost: openstack, messages: 0) 2026-04-06 06:51:39.865662 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - conductor.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.865669 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - conductor.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.865675 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - conductor.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.865992 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - conductor_fanout_testbed-node-0:nova-conductor:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866005 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - conductor_fanout_testbed-node-0:nova-conductor:2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866060 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - conductor_fanout_testbed-node-1:nova-conductor:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866070 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - conductor_fanout_testbed-node-1:nova-conductor:2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866076 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - conductor_fanout_testbed-node-2:nova-conductor:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866082 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - conductor_fanout_testbed-node-2:nova-conductor:2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866088 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - event.sample (vhost: openstack, messages: 2) 2026-04-06 06:51:39.866095 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - manila-data (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866101 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - manila-data.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866119 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - manila-data.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866126 | orchestrator | 2026-04-06 
06:51:39 | INFO  |  - manila-data.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866132 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - manila-data_fanout_testbed-node-0:manila-data:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866138 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - manila-data_fanout_testbed-node-1:manila-data:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866193 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - manila-data_fanout_testbed-node-2:manila-data:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866202 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - manila-scheduler (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866236 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866244 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866290 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866298 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - manila-scheduler_fanout_testbed-node-0:manila-scheduler:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866532 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - manila-scheduler_fanout_testbed-node-1:manila-scheduler:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866544 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - manila-scheduler_fanout_testbed-node-2:manila-scheduler:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866600 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - manila-share (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866610 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866616 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 
(vhost: openstack, messages: 0) 2026-04-06 06:51:39.866729 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866740 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - manila-share_fanout_testbed-node-0:manila-share:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866809 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - manila-share_fanout_testbed-node-1:manila-share:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.866931 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - manila-share_fanout_testbed-node-2:manila-share:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.867100 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - notifications.audit (vhost: openstack, messages: 0) 2026-04-06 06:51:39.867203 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - notifications.critical (vhost: openstack, messages: 0) 2026-04-06 06:51:39.867493 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - notifications.debug (vhost: openstack, messages: 0) 2026-04-06 06:51:39.867551 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - notifications.error (vhost: openstack, messages: 0) 2026-04-06 06:51:39.867863 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - notifications.info (vhost: openstack, messages: 0) 2026-04-06 06:51:39.867875 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - notifications.sample (vhost: openstack, messages: 0) 2026-04-06 06:51:39.868194 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - notifications.warn (vhost: openstack, messages: 0) 2026-04-06 06:51:39.868211 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - octavia_provisioning_v2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.868565 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.868579 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: openstack, messages: 0) 
2026-04-06 06:51:39.868587 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.868594 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - octavia_provisioning_v2_fanout_testbed-node-0:octavia-worker:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.868795 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - octavia_provisioning_v2_fanout_testbed-node-1:octavia-worker:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.868821 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - octavia_provisioning_v2_fanout_testbed-node-2:octavia-worker:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.868829 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - osism-listener-cinder (vhost: openstack, messages: 0) 2026-04-06 06:51:39.868894 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - osism-listener-glance (vhost: openstack, messages: 0) 2026-04-06 06:51:39.869126 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - osism-listener-ironic (vhost: openstack, messages: 0) 2026-04-06 06:51:39.869278 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - osism-listener-keystone (vhost: openstack, messages: 0) 2026-04-06 06:51:39.869292 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - osism-listener-neutron (vhost: openstack, messages: 0) 2026-04-06 06:51:39.869776 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - osism-listener-nova (vhost: openstack, messages: 0) 2026-04-06 06:51:39.869798 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - producer (vhost: openstack, messages: 0) 2026-04-06 06:51:39.869806 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - producer.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.869943 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - producer.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.869956 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - producer.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.870074 | 
orchestrator | 2026-04-06 06:51:39 | INFO  |  - producer_fanout_testbed-node-0:designate-producer:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.870228 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - producer_fanout_testbed-node-0:designate-producer:2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.870243 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - producer_fanout_testbed-node-1:designate-producer:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.870306 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - producer_fanout_testbed-node-1:designate-producer:2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.870317 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - producer_fanout_testbed-node-2:designate-producer:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.870519 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - producer_fanout_testbed-node-2:designate-producer:2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.870697 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-plugin (vhost: openstack, messages: 0) 2026-04-06 06:51:39.870723 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-plugin.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.870751 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-plugin.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.870841 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-plugin.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.870855 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-plugin_fanout_testbed-node-0:neutron-server:4 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.870912 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-plugin_fanout_testbed-node-0:neutron-server:5 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.870997 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-plugin_fanout_testbed-node-0:neutron-server:6 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.871058 | orchestrator | 2026-04-06 06:51:39 | INFO  |  
- q-plugin_fanout_testbed-node-1:neutron-server:4 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.871164 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-plugin_fanout_testbed-node-1:neutron-server:5 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.871337 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-plugin_fanout_testbed-node-1:neutron-server:6 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.871352 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-plugin_fanout_testbed-node-2:neutron-server:4 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.871361 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-plugin_fanout_testbed-node-2:neutron-server:5 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.871603 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-plugin_fanout_testbed-node-2:neutron-server:6 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.871619 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-reports-plugin (vhost: openstack, messages: 0) 2026-04-06 06:51:39.871872 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.871911 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.871925 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.871937 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.871950 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:10 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.872092 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:11 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.872143 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - 
q-reports-plugin_fanout_testbed-node-0:neutron-server:12 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.872341 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.872354 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:3 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.872362 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.872502 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:10 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.872526 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:11 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.872585 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:12 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.872596 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.872614 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:3 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.872674 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.872860 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:10 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.872928 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:11 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.872951 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - 
q-reports-plugin_fanout_testbed-node-2:neutron-server:12 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.873016 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.873027 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:3 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.873154 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-server-resource-versions (vhost: openstack, messages: 0) 2026-04-06 06:51:39.873259 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.873272 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.873615 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.873637 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-server-resource-versions_fanout_testbed-node-0:neutron-server:7 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.873801 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-server-resource-versions_fanout_testbed-node-0:neutron-server:8 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.873818 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-server-resource-versions_fanout_testbed-node-0:neutron-server:9 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.873827 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-server-resource-versions_fanout_testbed-node-1:neutron-server:7 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.873835 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-server-resource-versions_fanout_testbed-node-1:neutron-server:8 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.873843 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - 
q-server-resource-versions_fanout_testbed-node-1:neutron-server:9 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.873851 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-server-resource-versions_fanout_testbed-node-2:neutron-server:7 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.874081 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-server-resource-versions_fanout_testbed-node-2:neutron-server:8 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.874106 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - q-server-resource-versions_fanout_testbed-node-2:neutron-server:9 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.874135 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - reply_testbed-node-0:designate-manage:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.874300 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - reply_testbed-node-0:designate-producer:3 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.874313 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - reply_testbed-node-0:designate-producer:4 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.874320 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - reply_testbed-node-1:designate-producer:3 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.874327 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - reply_testbed-node-1:designate-producer:4 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.874333 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - reply_testbed-node-2:designate-producer:3 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.874410 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - reply_testbed-node-2:designate-producer:4 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.874419 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - reply_testbed-node-3:nova-compute:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.874441 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - reply_testbed-node-4:nova-compute:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.874518 | 
orchestrator | 2026-04-06 06:51:39 | INFO  |  - reply_testbed-node-5:nova-compute:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.874721 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - scheduler (vhost: openstack, messages: 0) 2026-04-06 06:51:39.874734 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - scheduler.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.874741 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - scheduler.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.874748 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - scheduler.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.874953 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - scheduler_fanout_testbed-node-0:nova-scheduler:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.874965 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - scheduler_fanout_testbed-node-0:nova-scheduler:2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.875046 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - scheduler_fanout_testbed-node-1:nova-scheduler:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.875057 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - scheduler_fanout_testbed-node-1:nova-scheduler:2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.875063 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - scheduler_fanout_testbed-node-2:nova-scheduler:1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.875576 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - scheduler_fanout_testbed-node-2:nova-scheduler:2 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.875589 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - worker (vhost: openstack, messages: 0) 2026-04-06 06:51:39.875596 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - worker.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.875603 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - worker.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-06 06:51:39.875610 | 
orchestrator | 2026-04-06 06:51:39 | INFO  |  - worker.testbed-node-2 (vhost: openstack, messages: 0)
2026-04-06 06:51:39.875616 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - worker_fanout_testbed-node-0:designate-worker:1 (vhost: openstack, messages: 0)
2026-04-06 06:51:39.875660 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - worker_fanout_testbed-node-0:designate-worker:2 (vhost: openstack, messages: 0)
2026-04-06 06:51:39.876030 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - worker_fanout_testbed-node-1:designate-worker:1 (vhost: openstack, messages: 0)
2026-04-06 06:51:39.876093 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - worker_fanout_testbed-node-1:designate-worker:2 (vhost: openstack, messages: 0)
2026-04-06 06:51:39.876105 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - worker_fanout_testbed-node-2:designate-worker:1 (vhost: openstack, messages: 0)
2026-04-06 06:51:39.876144 | orchestrator | 2026-04-06 06:51:39 | INFO  |  - worker_fanout_testbed-node-2:designate-worker:2 (vhost: openstack, messages: 0)
2026-04-06 06:51:40.124939 | orchestrator | + osism migrate rabbitmq3to4 delete-exchanges
2026-04-06 06:51:46.308733 | orchestrator | 2026-04-06 06:51:46 | ERROR  | Unable to get ansible vault password
2026-04-06 06:51:46.308850 | orchestrator | 2026-04-06 06:51:46 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-06 06:51:46.308876 | orchestrator | 2026-04-06 06:51:46 | ERROR  | Dropping encrypted entries
2026-04-06 06:51:46.343100 | orchestrator | 2026-04-06 06:51:46 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
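The long queue inventory above is read from RabbitMQ's management HTTP API (`GET /api/queues`), whose JSON entries carry `name`, `vhost`, and `messages` fields. Before a 3-to-4 migration proceeds, one check worth running on that listing is whether any queue still holds messages (in the log above, only `event.sample` does, with 2). A minimal sketch of that check on sample data only, not a live API call; `undrained` is an illustrative helper, not part of osism:

```python
# Sketch: flag RabbitMQ queues that still hold messages before migrating.
# Entries mirror the management API's GET /api/queues JSON fields
# (name, vhost, messages); sample values are taken from the log above.

def undrained(queues):
    """Return (vhost, name) for every queue that is not empty."""
    return [(q["vhost"], q["name"]) for q in queues if q["messages"] > 0]

queues = [
    {"name": "cinder-scheduler.testbed-node-0", "vhost": "openstack", "messages": 0},
    {"name": "event.sample", "vhost": "openstack", "messages": 2},
    {"name": "notifications.info", "vhost": "openstack", "messages": 0},
]

print(undrained(queues))  # [('openstack', 'event.sample')]
```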
2026-04-06 06:51:46.366358 | orchestrator | 2026-04-06 06:51:46 | INFO  | Found 27 exchange(s) in vhost '/'
2026-04-06 06:51:46.410096 | orchestrator | 2026-04-06 06:51:46 | INFO  | Deleted exchange: aodh
2026-04-06 06:51:46.448966 | orchestrator | 2026-04-06 06:51:46 | INFO  | Deleted exchange: ceilometer
2026-04-06 06:51:46.486368 | orchestrator | 2026-04-06 06:51:46 | INFO  | Deleted exchange: cinder
2026-04-06 06:51:46.532566 | orchestrator | 2026-04-06 06:51:46 | INFO  | Deleted exchange: designate
2026-04-06 06:51:46.572360 | orchestrator | 2026-04-06 06:51:46 | INFO  | Deleted exchange: dns
2026-04-06 06:51:46.618137 | orchestrator | 2026-04-06 06:51:46 | INFO  | Deleted exchange: glance
2026-04-06 06:51:46.651884 | orchestrator | 2026-04-06 06:51:46 | INFO  | Deleted exchange: heat
2026-04-06 06:51:46.689765 | orchestrator | 2026-04-06 06:51:46 | INFO  | Deleted exchange: ironic
2026-04-06 06:51:46.727020 | orchestrator | 2026-04-06 06:51:46 | INFO  | Deleted exchange: keystone
2026-04-06 06:51:46.758439 | orchestrator | 2026-04-06 06:51:46 | INFO  | Deleted exchange: l3_agent_fanout
2026-04-06 06:51:46.806450 | orchestrator | 2026-04-06 06:51:46 | INFO  | Deleted exchange: magnum
2026-04-06 06:51:46.857623 | orchestrator | 2026-04-06 06:51:46 | INFO  | Deleted exchange: magnum-conductor_fanout
2026-04-06 06:51:46.896610 | orchestrator | 2026-04-06 06:51:46 | INFO  | Deleted exchange: neutron
2026-04-06 06:51:46.937248 | orchestrator | 2026-04-06 06:51:46 | INFO  | Deleted exchange: neutron-vo-Network-1.1_fanout
2026-04-06 06:51:46.974300 | orchestrator | 2026-04-06 06:51:46 | INFO  | Deleted exchange: neutron-vo-Port-1.10_fanout
2026-04-06 06:51:47.008172 | orchestrator | 2026-04-06 06:51:47 | INFO  | Deleted exchange: neutron-vo-SecurityGroup-1.6_fanout
2026-04-06 06:51:47.043290 | orchestrator | 2026-04-06 06:51:47 | INFO  | Deleted exchange: neutron-vo-SecurityGroupRule-1.3_fanout
2026-04-06 06:51:47.080508 | orchestrator | 2026-04-06 06:51:47 | INFO  | Deleted exchange: neutron-vo-Subnet-1.2_fanout
2026-04-06 06:51:47.120938 | orchestrator | 2026-04-06 06:51:47 | INFO  | Deleted exchange: nova
2026-04-06 06:51:47.163337 | orchestrator | 2026-04-06 06:51:47 | INFO  | Deleted exchange: octavia
2026-04-06 06:51:47.203299 | orchestrator | 2026-04-06 06:51:47 | INFO  | Deleted exchange: openstack
2026-04-06 06:51:47.241069 | orchestrator | 2026-04-06 06:51:47 | INFO  | Deleted exchange: q-agent-notifier-port-update_fanout
2026-04-06 06:51:47.272442 | orchestrator | 2026-04-06 06:51:47 | INFO  | Deleted exchange: q-agent-notifier-security_group-update_fanout
2026-04-06 06:51:47.303505 | orchestrator | 2026-04-06 06:51:47 | INFO  | Deleted exchange: scheduler_fanout
2026-04-06 06:51:47.336522 | orchestrator | 2026-04-06 06:51:47 | INFO  | Deleted exchange: swift
2026-04-06 06:51:47.378008 | orchestrator | 2026-04-06 06:51:47 | INFO  | Deleted exchange: trove
2026-04-06 06:51:47.411773 | orchestrator | 2026-04-06 06:51:47 | INFO  | Deleted exchange: zaqar
2026-04-06 06:51:47.411949 | orchestrator | 2026-04-06 06:51:47 | INFO  | Successfully deleted 27 exchange(s) in vhost '/'
2026-04-06 06:51:47.661329 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-04-06 06:51:54.112852 | orchestrator | 2026-04-06 06:51:54 | ERROR  | Unable to get ansible vault password
2026-04-06 06:51:54.112959 | orchestrator | 2026-04-06 06:51:54 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-06 06:51:54.113006 | orchestrator | 2026-04-06 06:51:54 | ERROR  | Dropping encrypted entries
2026-04-06 06:51:54.149530 | orchestrator | 2026-04-06 06:51:54 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
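A delete-exchanges run like the one above has to skip RabbitMQ's built-in exchanges: the nameless default exchange and the `amq.*` family are protected, and the management API (`DELETE /api/exchanges/{vhost}/{name}`) refuses to remove them. A minimal sketch of that filter, using a few exchange names from the log; `deletable` is an illustrative helper, not the actual osism implementation:

```python
# Sketch: select user-created exchanges that a rabbitmq3to4 migration may
# delete, skipping RabbitMQ built-ins (the default exchange "" and amq.*).

def deletable(names):
    """Keep only exchanges that the management API will allow deleting."""
    return [n for n in names if n and not n.startswith("amq.")]

# nova, neutron, and cinder appear among the 27 deleted exchanges above.
names = ["", "amq.direct", "amq.topic", "nova", "neutron", "cinder"]
print(deletable(names))  # ['nova', 'neutron', 'cinder']
```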
2026-04-06 06:51:54.165302 | orchestrator | 2026-04-06 06:51:54 | INFO  | No exchanges found in vhost '/'
2026-04-06 06:51:54.432042 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-04-06 06:51:54.432161 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/400-monitoring.sh
2026-04-06 06:51:55.707461 | orchestrator | 2026-04-06 06:51:55 | INFO  | Prepare task for execution of prometheus.
2026-04-06 06:51:55.772546 | orchestrator | 2026-04-06 06:51:55 | INFO  | Task df0887a8-6672-4377-8b2c-286253a4fdc0 (prometheus) was prepared for execution.
2026-04-06 06:51:55.772635 | orchestrator | 2026-04-06 06:51:55 | INFO  | It takes a moment until task df0887a8-6672-4377-8b2c-286253a4fdc0 (prometheus) has been started and output is visible here.
2026-04-06 06:52:13.757584 | orchestrator |
2026-04-06 06:52:13.757678 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 06:52:13.757689 | orchestrator |
2026-04-06 06:52:13.757697 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 06:52:13.757704 | orchestrator | Monday 06 April 2026 06:52:00 +0000 (0:00:01.821) 0:00:01.821 **********
2026-04-06 06:52:13.757712 | orchestrator | ok: [testbed-manager]
2026-04-06 06:52:13.757720 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:52:13.757727 | orchestrator | ok: [testbed-node-1]
2026-04-06 06:52:13.757735 | orchestrator | ok: [testbed-node-2]
2026-04-06 06:52:13.757742 | orchestrator | ok: [testbed-node-3]
2026-04-06 06:52:13.757749 | orchestrator | ok: [testbed-node-4]
2026-04-06 06:52:13.757756 | orchestrator | ok: [testbed-node-5]
2026-04-06 06:52:13.757763 | orchestrator |
2026-04-06 06:52:13.757771 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 06:52:13.757778 | orchestrator | Monday 06 April 2026 06:52:04 +0000 (0:00:03.158) 0:00:04.980 **********
2026-04-06 06:52:13.757786 | orchestrator | ok:
[testbed-manager] => (item=enable_prometheus_True) 2026-04-06 06:52:13.757793 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-04-06 06:52:13.757801 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-04-06 06:52:13.757808 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-04-06 06:52:13.757815 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-04-06 06:52:13.757822 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-04-06 06:52:13.757829 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-04-06 06:52:13.757836 | orchestrator | 2026-04-06 06:52:13.757843 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-04-06 06:52:13.757850 | orchestrator | 2026-04-06 06:52:13.757857 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-06 06:52:13.757865 | orchestrator | Monday 06 April 2026 06:52:06 +0000 (0:00:02.744) 0:00:07.724 ********** 2026-04-06 06:52:13.757872 | orchestrator | included: /ansible/roles/prometheus/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 06:52:13.757881 | orchestrator | 2026-04-06 06:52:13.757888 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-04-06 06:52:13.757895 | orchestrator | Monday 06 April 2026 06:52:10 +0000 (0:00:04.140) 0:00:11.865 ********** 2026-04-06 06:52:13.757908 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-06 06:52:13.757940 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 06:52:13.757963 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 06:52:13.757987 | orchestrator | ok: [testbed-node-3] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 06:52:13.757996 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 06:52:13.758004 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 06:52:13.758013 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:13.758079 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:13.758088 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 06:52:13.758122 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 06:52:13.758137 | orchestrator | ok: [testbed-manager] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 06:52:13.758154 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 06:52:14.558799 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:14.558903 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:14.558921 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:14.558962 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-06 06:52:14.558976 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 06:52:14.559008 | orchestrator | ok: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:52:14.559042 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 06:52:14.559055 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:14.559067 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 06:52:14.559088 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 06:52:14.559183 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-06 06:52:14.559204 | 
orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:14.559227 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-06 06:52:14.559239 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 06:52:14.559260 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:22.125486 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:22.125643 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:22.125662 | orchestrator | 2026-04-06 06:52:22.125676 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-06 06:52:22.125688 | orchestrator | Monday 06 April 2026 06:52:16 +0000 (0:00:05.471) 0:00:17.336 ********** 2026-04-06 06:52:22.125700 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-06 06:52:22.125713 | orchestrator | 2026-04-06 
06:52:22.125724 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-06 06:52:22.125735 | orchestrator | Monday 06 April 2026 06:52:19 +0000 (0:00:02.979) 0:00:20.316 ********** 2026-04-06 06:52:22.125749 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-06 06:52:22.125764 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 06:52:22.125776 | orchestrator 
| ok: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 06:52:22.125854 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 06:52:22.125880 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 06:52:22.125891 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 06:52:22.125903 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 06:52:22.125914 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 06:52:22.125926 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:22.125943 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:22.125956 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:22.125976 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 06:52:24.210298 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 06:52:24.210401 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 06:52:24.210418 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 06:52:24.210432 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:24.210463 | orchestrator | ok: 
[testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:24.210475 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:24.210487 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-06 06:52:24.210544 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:52:24.210560 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-06 06:52:24.210572 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-06 06:52:24.210584 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 06:52:24.210603 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 06:52:24.210615 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 06:52:24.210634 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:24.210653 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:27.154811 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:27.154921 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 
06:52:27.154936 | orchestrator | 2026-04-06 06:52:27.154947 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-06 06:52:27.155002 | orchestrator | Monday 06 April 2026 06:52:25 +0000 (0:00:06.141) 0:00:26.458 ********** 2026-04-06 06:52:27.155015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 06:52:27.155041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:52:27.155054 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-06 06:52:27.155086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:52:27.155137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 06:52:27.155148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 06:52:27.155158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:52:27.155167 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:52:27.155177 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 06:52:27.155207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:52:27.155234 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 06:52:27.155243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 06:52:27.155259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:52:27.763560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 06:52:27.763676 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:52:27.763732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 06:52:27.763799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 06:52:27.763821 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-06 06:52:27.763839 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:52:27.763853 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 06:52:27.763886 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:52:27.763898 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:52:27.763911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:52:27.763922 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:52:27.763935 | orchestrator | skipping: [testbed-manager] 2026-04-06 06:52:27.763946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:52:27.763974 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 06:52:27.763986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 06:52:27.764002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 06:52:27.764031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:52:30.438784 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:52:30.438868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 06:52:30.438881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-06 
06:52:30.438889 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:52:30.438897 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-06 06:52:30.438928 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:52:30.438935 | orchestrator | 2026-04-06 06:52:30.438943 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-06 06:52:30.438951 | orchestrator | Monday 06 April 2026 06:52:29 +0000 (0:00:03.574) 0:00:30.033 ********** 2026-04-06 06:52:30.438973 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option 
httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-06 06:52:30.438982 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 06:52:30.438991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 06:52:30.439012 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 06:52:30.439020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 06:52:30.439033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:52:30.439043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 06:52:30.439051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:52:30.439058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 06:52:30.439065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:52:30.439078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:52:31.500864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:52:31.500968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 06:52:31.501026 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': 
'9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 06:52:31.501042 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:52:31.501055 | orchestrator | skipping: [testbed-manager] 2026-04-06 06:52:31.501069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:52:31.501081 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:52:31.501143 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}})  2026-04-06 06:52:31.501175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 06:52:31.501188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:52:31.501211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 06:52:31.501240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 06:52:31.501261 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-06 06:52:31.501293 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:52:31.501312 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-06 06:52:31.501330 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:52:31.501350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 06:52:31.501380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:52:36.920592 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:52:36.920702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:52:36.920721 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:52:36.920735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 06:52:36.920765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 06:52:36.920778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-06 06:52:36.920789 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:52:36.920801 | orchestrator | 2026-04-06 06:52:36.920813 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-06 06:52:36.920831 | orchestrator | Monday 06 April 2026 06:52:33 +0000 (0:00:04.399) 0:00:34.432 ********** 2026-04-06 06:52:36.920854 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-06 06:52:36.920901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 06:52:36.920951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 06:52:36.920973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 06:52:36.921003 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 06:52:36.921023 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 06:52:36.921035 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 06:52:36.921047 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-06 06:52:36.921059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:36.921130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-06 06:52:39.010400 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 06:52:39.010504 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 06:52:39.010539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:39.010553 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 06:52:39.010565 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 06:52:39.010578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:39.010618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:39.010650 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-06 06:52:39.010662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:52:39.010674 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-06 06:52:39.010691 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-06 06:52:39.010716 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:52:39.010738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 06:52:39.010769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 06:52:39.010800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-06 06:53:16.052129 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:53:16.052238 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:53:16.052251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:53:16.052260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-06 06:53:16.052268 | orchestrator | 2026-04-06 06:53:16.052277 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-04-06 06:53:16.052305 | orchestrator | Monday 06 April 2026 06:52:42 +0000 (0:00:08.592) 0:00:43.025 ********** 2026-04-06 06:53:16.052313 | orchestrator | ok: 
[testbed-manager -> localhost] 2026-04-06 06:53:16.052321 | orchestrator | 2026-04-06 06:53:16.052329 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-06 06:53:16.052336 | orchestrator | Monday 06 April 2026 06:52:44 +0000 (0:00:02.343) 0:00:45.369 ********** 2026-04-06 06:53:16.052343 | orchestrator | skipping: [testbed-manager] 2026-04-06 06:53:16.052350 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:53:16.052358 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:53:16.052365 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:53:16.052372 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:53:16.052380 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:53:16.052387 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:53:16.052394 | orchestrator | 2026-04-06 06:53:16.052401 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-04-06 06:53:16.052408 | orchestrator | Monday 06 April 2026 06:52:46 +0000 (0:00:02.015) 0:00:47.384 ********** 2026-04-06 06:53:16.052415 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-06 06:53:16.052422 | orchestrator | 2026-04-06 06:53:16.052429 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-04-06 06:53:16.052436 | orchestrator | Monday 06 April 2026 06:52:48 +0000 (0:00:02.430) 0:00:49.814 ********** 2026-04-06 06:53:16.052443 | orchestrator | [WARNING]: Skipped 2026-04-06 06:53:16.052452 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 06:53:16.052460 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-04-06 06:53:16.052467 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 06:53:16.052474 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-04-06 06:53:16.052481 | orchestrator | 
ok: [testbed-manager -> localhost] 2026-04-06 06:53:16.052489 | orchestrator | [WARNING]: Skipped 2026-04-06 06:53:16.052496 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 06:53:16.052503 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-04-06 06:53:16.052510 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 06:53:16.052517 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-04-06 06:53:16.052524 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-06 06:53:16.052532 | orchestrator | [WARNING]: Skipped 2026-04-06 06:53:16.052539 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 06:53:16.052546 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-04-06 06:53:16.052566 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 06:53:16.052574 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-04-06 06:53:16.052581 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-06 06:53:16.052588 | orchestrator | [WARNING]: Skipped 2026-04-06 06:53:16.052595 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 06:53:16.052602 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-04-06 06:53:16.052609 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 06:53:16.052617 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-04-06 06:53:16.052624 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-06 06:53:16.052631 | orchestrator | [WARNING]: Skipped 2026-04-06 06:53:16.052638 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 06:53:16.052646 | orchestrator | node-3/prometheus.yml.d' path due to this 
access issue: 2026-04-06 06:53:16.052655 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 06:53:16.052663 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-04-06 06:53:16.052677 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-06 06:53:16.052690 | orchestrator | [WARNING]: Skipped 2026-04-06 06:53:16.052699 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 06:53:16.052707 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-04-06 06:53:16.052715 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 06:53:16.052724 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-04-06 06:53:16.052733 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-06 06:53:16.052741 | orchestrator | [WARNING]: Skipped 2026-04-06 06:53:16.052750 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 06:53:16.052759 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-04-06 06:53:16.052767 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-06 06:53:16.052775 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-04-06 06:53:16.052782 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-06 06:53:16.052789 | orchestrator | 2026-04-06 06:53:16.052796 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-04-06 06:53:16.052803 | orchestrator | Monday 06 April 2026 06:52:52 +0000 (0:00:03.130) 0:00:52.945 ********** 2026-04-06 06:53:16.052810 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-06 06:53:16.052818 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:53:16.052825 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-06 06:53:16.052832 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:53:16.052839 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-06 06:53:16.052846 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:53:16.052854 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-06 06:53:16.052861 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:53:16.052868 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-06 06:53:16.052875 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:53:16.052882 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-06 06:53:16.052889 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:53:16.052896 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-04-06 06:53:16.052904 | orchestrator | 2026-04-06 06:53:16.052911 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-04-06 06:53:16.052918 | orchestrator | Monday 06 April 2026 06:53:10 +0000 (0:00:18.895) 0:01:11.841 ********** 2026-04-06 06:53:16.052925 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-06 06:53:16.052932 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:53:16.052939 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-06 06:53:16.052946 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:53:16.052953 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-06 06:53:16.052960 | orchestrator | skipping: 
[testbed-node-3] 2026-04-06 06:53:16.052968 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-06 06:53:16.052975 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:53:16.052982 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-06 06:53:16.052989 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:53:16.052996 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-06 06:53:16.053008 | orchestrator | skipping: [testbed-node-5] 2026-04-06 06:53:16.053015 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-04-06 06:53:16.053022 | orchestrator | 2026-04-06 06:53:16.053030 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-04-06 06:53:16.053037 | orchestrator | Monday 06 April 2026 06:53:15 +0000 (0:00:04.561) 0:01:16.403 ********** 2026-04-06 06:53:16.053048 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-06 06:53:56.658543 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:53:56.658696 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-06 06:53:56.658720 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:53:56.658732 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-06 06:53:56.658744 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:53:56.658755 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-06 
06:53:56.658767 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:53:56.658778 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-06 06:53:56.658814 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-06 06:53:56.658833 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:53:56.658852 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-06 06:53:56.658870 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:53:56.658882 | orchestrator |
2026-04-06 06:53:56.658895 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-04-06 06:53:56.658907 | orchestrator | Monday 06 April 2026  06:53:18 +0000 (0:00:02.974)       0:01:19.377 **********
2026-04-06 06:53:56.658918 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-06 06:53:56.658929 | orchestrator |
2026-04-06 06:53:56.658940 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-04-06 06:53:56.658952 | orchestrator | Monday 06 April 2026  06:53:20 +0000 (0:00:01.770)       0:01:21.147 **********
2026-04-06 06:53:56.658970 | orchestrator | skipping: [testbed-manager]
2026-04-06 06:53:56.658989 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:53:56.659008 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:53:56.659027 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:53:56.659046 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:53:56.659121 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:53:56.659140 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:53:56.659159 | orchestrator |
2026-04-06 06:53:56.659179 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-04-06 06:53:56.659201 | orchestrator | Monday 06 April 2026  06:53:22 +0000 (0:00:01.938)       0:01:23.086 **********
2026-04-06 06:53:56.659219 | orchestrator | skipping: [testbed-manager]
2026-04-06 06:53:56.659235 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:53:56.659246 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:53:56.659257 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:53:56.659268 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:53:56.659280 | orchestrator | ok: [testbed-node-1]
2026-04-06 06:53:56.659291 | orchestrator | ok: [testbed-node-2]
2026-04-06 06:53:56.659302 | orchestrator |
2026-04-06 06:53:56.659313 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-04-06 06:53:56.659324 | orchestrator | Monday 06 April 2026  06:53:25 +0000 (0:00:03.518)       0:01:26.605 **********
2026-04-06 06:53:56.659367 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-06 06:53:56.659386 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-06 06:53:56.659414 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-06 06:53:56.659435 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-06 06:53:56.659452 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:53:56.659468 | orchestrator | skipping: [testbed-manager]
2026-04-06 06:53:56.659484 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:53:56.659501 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:53:56.659520 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-06 06:53:56.659539 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:53:56.659558 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-06 06:53:56.659569 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:53:56.659580 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-04-06 06:53:56.659591 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:53:56.659602 | orchestrator |
2026-04-06 06:53:56.659614 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-04-06 06:53:56.659625 | orchestrator | Monday 06 April 2026  06:53:28 +0000 (0:00:02.817)       0:01:29.423 **********
2026-04-06 06:53:56.659636 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-06 06:53:56.659647 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:53:56.659658 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-06 06:53:56.659669 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:53:56.659680 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-06 06:53:56.659691 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:53:56.659702 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-06 06:53:56.659736 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-06 06:53:56.659748 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-06 06:53:56.659759 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:53:56.659770 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:53:56.659781 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-04-06 06:53:56.659791 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:53:56.659802 | orchestrator |
2026-04-06 06:53:56.659813 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-04-06 06:53:56.659824 | orchestrator | Monday 06 April 2026  06:53:31 +0000 (0:00:03.068)       0:01:32.491 **********
2026-04-06 06:53:56.659835 | orchestrator | [WARNING]: Skipped
2026-04-06 06:53:56.659846 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-04-06 06:53:56.659857 | orchestrator | due to this access issue:
2026-04-06 06:53:56.659867 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-04-06 06:53:56.659878 | orchestrator | not a directory
2026-04-06 06:53:56.659898 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-06 06:53:56.659909 | orchestrator |
2026-04-06 06:53:56.659920 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-04-06 06:53:56.659931 | orchestrator | Monday 06 April 2026  06:53:34 +0000 (0:00:02.440)       0:01:34.932 **********
2026-04-06 06:53:56.659942 | orchestrator | skipping: [testbed-manager]
2026-04-06 06:53:56.659965 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:53:56.659976 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:53:56.659987 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:53:56.659998 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:53:56.660026 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:53:56.660048 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:53:56.660085 | orchestrator |
2026-04-06 06:53:56.660097 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-04-06 06:53:56.660108 | orchestrator | Monday 06 April 2026  06:53:35 +0000 (0:00:01.933)       0:01:36.865 **********
2026-04-06 06:53:56.660119 | orchestrator | skipping: [testbed-manager]
2026-04-06 06:53:56.660130 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:53:56.660141 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:53:56.660152 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:53:56.660163 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:53:56.660174 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:53:56.660184 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:53:56.660195 | orchestrator |
2026-04-06 06:53:56.660206 | orchestrator | TASK [prometheus : Check for the existence of Prometheus v2 container volume] ***
2026-04-06 06:53:56.660217 | orchestrator | Monday 06 April 2026  06:53:38 +0000 (0:00:02.425)       0:01:39.291 **********
2026-04-06 06:53:56.660228 | orchestrator | ok: [testbed-manager]
2026-04-06 06:53:56.660239 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:53:56.660250 | orchestrator | ok: [testbed-node-1]
2026-04-06 06:53:56.660261 | orchestrator | ok: [testbed-node-2]
2026-04-06 06:53:56.660271 | orchestrator | ok: [testbed-node-3]
2026-04-06 06:53:56.660282 | orchestrator | ok: [testbed-node-4]
2026-04-06 06:53:56.660293 | orchestrator | ok: [testbed-node-5]
2026-04-06 06:53:56.660304 | orchestrator |
2026-04-06 06:53:56.660315 | orchestrator | TASK [prometheus : Gracefully stop Prometheus] *********************************
2026-04-06 06:53:56.660326 | orchestrator | Monday 06 April 2026  06:53:40 +0000 (0:00:02.336)       0:01:41.628 **********
2026-04-06 06:53:56.660337 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:53:56.660348 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:53:56.660359 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:53:56.660370 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:53:56.660401 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:53:56.660412 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:53:56.660422 | orchestrator | changed: [testbed-manager]
2026-04-06 06:53:56.660433 | orchestrator |
2026-04-06 06:53:56.660444 | orchestrator | TASK [prometheus : Create new Prometheus v3 volume] ****************************
2026-04-06 06:53:56.660455 | orchestrator | Monday 06 April 2026  06:53:48 +0000 (0:00:08.065)       0:01:49.693 **********
2026-04-06 06:53:56.660465 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:53:56.660476 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:53:56.660486 | orchestrator | changed: [testbed-manager]
2026-04-06 06:53:56.660497 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:53:56.660507 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:53:56.660518 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:53:56.660529 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:53:56.660539 | orchestrator |
2026-04-06 06:53:56.660550 | orchestrator | TASK [prometheus : Move _data from old to new volume] **************************
2026-04-06 06:53:56.660561 | orchestrator | Monday 06 April 2026  06:53:50 +0000 (0:00:02.229)       0:01:51.923 **********
2026-04-06 06:53:56.660571 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:53:56.660582 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:53:56.660593 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:53:56.660603 | orchestrator | changed: [testbed-manager]
2026-04-06 06:53:56.660614 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:53:56.660624 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:53:56.660635 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:53:56.660646 | orchestrator |
2026-04-06 06:53:56.660656 | orchestrator | TASK [prometheus : Remove old Prometheus v2 volume] ****************************
2026-04-06 06:53:56.660675 | orchestrator | Monday 06 April 2026  06:53:53 +0000 (0:00:02.178)       0:01:54.101 **********
2026-04-06 06:53:56.660686 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:53:56.660696 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:53:56.660707 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:53:56.660717 | orchestrator | changed: [testbed-manager]
2026-04-06 06:53:56.660728 | orchestrator | skipping: [testbed-node-3]
2026-04-06 06:53:56.660739 | orchestrator | skipping: [testbed-node-4]
2026-04-06 06:53:56.660749 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:53:56.660760 | orchestrator |
2026-04-06 06:53:56.660771 | orchestrator | TASK [service-check-containers : prometheus | Check containers] ****************
2026-04-06 06:53:56.660781 | orchestrator | Monday 06 April 2026  06:53:55 +0000 (0:00:02.420)       0:01:56.522 **********
2026-04-06 06:53:56.660815 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-06 06:53:58.441271 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 06:53:58.441369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 06:53:58.441385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 06:53:58.441397 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 06:53:58.441433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 06:53:58.441445 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 06:53:58.441456 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 06:53:58.441500 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 06:53:58.441514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 06:53:58.441526 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 06:53:58.441537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 06:53:58.441557 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 06:53:58.441570 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 06:53:58.441582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 06:53:58.441610 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:54:04.596328 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-06 06:54:04.596402 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-06 06:54:04.596409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 06:54:04.596429 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-06 06:54:04.596433 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 06:54:04.596439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 06:54:04.596453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 06:54:04.596468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 06:54:04.596474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 06:54:04.596478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 06:54:04.596488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 06:54:04.596493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 06:54:04.596497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 06:54:04.596501 | orchestrator |
2026-04-06 06:54:04.596507 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] ***
2026-04-06 06:54:04.596511 | orchestrator | Monday 06 April 2026  06:54:01 +0000 (0:00:06.377)       0:02:02.900 **********
2026-04-06 06:54:04.596516 | orchestrator | changed: [testbed-manager] => {
2026-04-06 06:54:04.596521 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 06:54:04.596525 | orchestrator | }
2026-04-06 06:54:04.596529 | orchestrator | changed: [testbed-node-0] => {
2026-04-06 06:54:04.596533 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 06:54:04.596536 | orchestrator | }
2026-04-06 06:54:04.596540 | orchestrator | changed: [testbed-node-1] => {
2026-04-06 06:54:04.596544 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 06:54:04.596548 | orchestrator | }
2026-04-06 06:54:04.596551 | orchestrator | changed: [testbed-node-2] => {
2026-04-06 06:54:04.596558 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 06:54:04.596562 | orchestrator | }
2026-04-06 06:54:04.596566 | orchestrator | changed: [testbed-node-3] => {
2026-04-06 06:54:04.596570 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 06:54:04.596574 | orchestrator | }
2026-04-06 06:54:04.596577 | orchestrator | changed: [testbed-node-4] => {
2026-04-06 06:54:04.596581 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 06:54:04.596585 | orchestrator | }
2026-04-06 06:54:04.596589 | orchestrator | changed: [testbed-node-5] => {
2026-04-06 06:54:04.596593 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 06:54:04.596596 | orchestrator | }
2026-04-06 06:54:04.596600 | orchestrator |
2026-04-06 06:54:04.596604 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-06 06:54:04.596608 | orchestrator | Monday 06 April 2026  06:54:04 +0000 (0:00:02.105)       0:02:05.006 **********
2026-04-06 06:54:04.596618 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-06 06:54:04.897910 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 06:54:04.898208 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 06:54:04.898237 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:54:04.898271 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 06:54:04.898285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 06:54:04.898321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 06:54:04.898354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 06:54:04.898369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 06:54:04.898383 | orchestrator | skipping: [testbed-manager]
2026-04-06 06:54:04.898398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 06:54:04.898413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-06 06:54:04.898427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 06:54:04.898446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-06 06:54:04.898467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 06:54:04.898481 | orchestrator | skipping: [testbed-node-1] =>
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:54:04.898501 | orchestrator | skipping: [testbed-node-0] 2026-04-06 06:54:08.110245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 06:54:08.110374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:54:08.110403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:54:08.110423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 06:54:08.110465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-06 06:54:08.110523 | orchestrator | skipping: [testbed-node-1] 2026-04-06 06:54:08.110547 | orchestrator | skipping: [testbed-node-2] 2026-04-06 06:54:08.110564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 06:54:08.110576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 06:54:08.110609 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-06 06:54:08.110621 | orchestrator | skipping: [testbed-node-3] 2026-04-06 06:54:08.110633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}})  2026-04-06 06:54:08.110644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-06 06:54:08.110656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-06 06:54:08.110667 | orchestrator | skipping: [testbed-node-4] 2026-04-06 06:54:08.110686 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-06 06:54:08.110710 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-06 06:54:08.110723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-06 06:54:08.110736 | orchestrator | skipping: [testbed-node-5]
2026-04-06 06:54:08.110750 | orchestrator |
2026-04-06 06:54:08.110764 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-06 06:54:08.110778 | orchestrator | Monday 06 April 2026 06:54:07 +0000 (0:00:03.146) 0:02:08.153 **********
2026-04-06 06:54:08.110791 | orchestrator |
2026-04-06 06:54:08.110804 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-06 06:54:08.110817 | orchestrator | Monday 06 April 2026 06:54:07 +0000 (0:00:00.444) 0:02:08.597 **********
2026-04-06 06:54:08.110831 | orchestrator |
2026-04-06 06:54:08.110844 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-06 06:54:08.110864 | orchestrator | Monday 06 April 2026 06:54:08 +0000 (0:00:00.441) 0:02:09.038 **********
2026-04-06 06:56:31.294449 | orchestrator |
2026-04-06 06:56:31.294567 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-06 06:56:31.294584 | orchestrator | Monday 06 April 2026 06:54:08 +0000 (0:00:00.439) 0:02:09.477 **********
2026-04-06 06:56:31.294596 | orchestrator |
2026-04-06 06:56:31.294607 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-06 06:56:31.294618 | orchestrator | Monday 06 April 2026 06:54:09 +0000 (0:00:00.464) 0:02:09.942 **********
2026-04-06 06:56:31.294629 | orchestrator |
2026-04-06 06:56:31.294640 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-06 06:56:31.294651 | orchestrator | Monday 06 April 2026 06:54:09 +0000 (0:00:00.464) 0:02:10.406 **********
2026-04-06 06:56:31.294662 | orchestrator |
2026-04-06 06:56:31.294673 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-06 06:56:31.294684 | orchestrator | Monday 06 April 2026 06:54:10 +0000 (0:00:00.686) 0:02:11.093 **********
2026-04-06 06:56:31.294695 | orchestrator |
2026-04-06 06:56:31.294705 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-04-06 06:56:31.294716 | orchestrator | Monday 06 April 2026 06:54:10 +0000 (0:00:00.825) 0:02:11.919 **********
2026-04-06 06:56:31.294727 | orchestrator | changed: [testbed-manager]
2026-04-06 06:56:31.294739 | orchestrator |
2026-04-06 06:56:31.294749 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-04-06 06:56:31.294760 | orchestrator | Monday 06 April 2026 06:54:35 +0000 (0:00:24.407) 0:02:36.327 **********
2026-04-06 06:56:31.294771 | orchestrator | changed: [testbed-manager]
2026-04-06 06:56:31.294782 | orchestrator | changed: [testbed-node-4]
2026-04-06 06:56:31.294793 | orchestrator | changed: [testbed-node-3]
2026-04-06 06:56:31.294827 | orchestrator | changed: [testbed-node-5]
2026-04-06 06:56:31.294841 | orchestrator | changed: [testbed-node-1]
2026-04-06 06:56:31.294859 | orchestrator | changed: [testbed-node-2]
2026-04-06 06:56:31.294878 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:56:31.294896 | orchestrator |
2026-04-06 06:56:31.294914 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-04-06 06:56:31.294932 | orchestrator | Monday 06 April 2026 06:54:53 +0000 (0:00:18.436) 0:02:54.763 **********
2026-04-06 06:56:31.294950 | orchestrator | changed: [testbed-node-2]
2026-04-06 06:56:31.294968 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:56:31.294984 | orchestrator | changed: [testbed-node-1]
2026-04-06 06:56:31.295002 | orchestrator |
2026-04-06 06:56:31.295021 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-04-06 06:56:31.295076 | orchestrator | Monday 06 April 2026 06:55:07 +0000 (0:00:13.241) 0:03:08.005 **********
2026-04-06 06:56:31.295093 | orchestrator | changed: [testbed-node-2]
2026-04-06 06:56:31.295112 | orchestrator | changed: [testbed-node-1]
2026-04-06 06:56:31.295130 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:56:31.295150 | orchestrator |
2026-04-06 06:56:31.295170 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-04-06 06:56:31.295190 | orchestrator | Monday 06 April 2026 06:55:19 +0000 (0:00:12.498) 0:03:20.504 **********
2026-04-06 06:56:31.295208 | orchestrator | changed: [testbed-node-5]
2026-04-06 06:56:31.295226 | orchestrator | changed: [testbed-manager]
2026-04-06 06:56:31.295244 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:56:31.295263 | orchestrator | changed: [testbed-node-4]
2026-04-06 06:56:31.295283 | orchestrator | changed: [testbed-node-3]
2026-04-06 06:56:31.295295 | orchestrator | changed: [testbed-node-2]
2026-04-06 06:56:31.295314 | orchestrator | changed: [testbed-node-1]
2026-04-06 06:56:31.295332 | orchestrator |
2026-04-06 06:56:31.295350 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-04-06 06:56:31.295388 | orchestrator | Monday 06 April 2026 06:55:37 +0000 (0:00:17.861) 0:03:38.366 **********
2026-04-06 06:56:31.295409 | orchestrator | changed: [testbed-manager]
2026-04-06 06:56:31.295427 | orchestrator |
2026-04-06 06:56:31.295445 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-04-06 06:56:31.295465 | orchestrator | Monday 06 April 2026 06:55:52 +0000 (0:00:15.214) 0:03:53.581 **********
2026-04-06 06:56:31.295484 | orchestrator | changed: [testbed-node-2]
2026-04-06 06:56:31.295502 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:56:31.295521 | orchestrator | changed: [testbed-node-1]
2026-04-06 06:56:31.295540 | orchestrator |
2026-04-06 06:56:31.295558 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-04-06 06:56:31.295577 | orchestrator | Monday 06 April 2026 06:56:05 +0000 (0:00:13.030) 0:04:06.611 **********
2026-04-06 06:56:31.295596 | orchestrator | changed: [testbed-manager]
2026-04-06 06:56:31.295614 | orchestrator |
2026-04-06 06:56:31.295633 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-04-06 06:56:31.295652 | orchestrator | Monday 06 April 2026 06:56:17 +0000 (0:00:12.248) 0:04:18.860 **********
2026-04-06 06:56:31.295670 | orchestrator | changed: [testbed-node-3]
2026-04-06 06:56:31.295689 | orchestrator | changed: [testbed-node-4]
2026-04-06 06:56:31.295708 | orchestrator | changed: [testbed-node-5]
2026-04-06 06:56:31.295726 | orchestrator |
2026-04-06 06:56:31.295744 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 06:56:31.295764 | orchestrator | testbed-manager : ok=28  changed=14  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-06 06:56:31.295785 | orchestrator | testbed-node-0 : ok=17  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-06 06:56:31.295803 | orchestrator | testbed-node-1 : ok=17  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-06 06:56:31.295837 | orchestrator | testbed-node-2 : ok=17  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-06 06:56:31.295880 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-06 06:56:31.295900 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-06 06:56:31.295918 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-06 06:56:31.295938 | orchestrator |
2026-04-06 06:56:31.295957 | orchestrator |
2026-04-06 06:56:31.295976 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 06:56:31.295994 | orchestrator | Monday 06 April 2026 06:56:30 +0000 (0:00:12.964) 0:04:31.825 **********
2026-04-06 06:56:31.296012 | orchestrator | ===============================================================================
2026-04-06 06:56:31.296056 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 24.41s
2026-04-06 06:56:31.296075 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 18.90s
2026-04-06 06:56:31.296095 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 18.44s
2026-04-06 06:56:31.296106 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 17.86s
2026-04-06 06:56:31.296117 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 15.21s
2026-04-06 06:56:31.296128 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 13.24s
2026-04-06 06:56:31.296139 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 13.03s
2026-04-06 06:56:31.296152 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 12.96s
2026-04-06 06:56:31.296171 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 12.50s
2026-04-06 06:56:31.296189 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 12.25s
2026-04-06 06:56:31.296208 | orchestrator | prometheus : Copying over config.json files ----------------------------- 8.59s
2026-04-06 06:56:31.296227 | orchestrator | prometheus : Gracefully stop Prometheus --------------------------------- 8.07s
2026-04-06 06:56:31.296246 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 6.38s
2026-04-06 06:56:31.296266 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.14s
2026-04-06 06:56:31.296278 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 5.47s
2026-04-06 06:56:31.296288 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.56s
2026-04-06 06:56:31.296299 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 4.40s
2026-04-06 06:56:31.296310 | orchestrator | prometheus : include_tasks ---------------------------------------------- 4.14s
2026-04-06 06:56:31.296321 | orchestrator | prometheus : Flush handlers --------------------------------------------- 3.77s
2026-04-06 06:56:31.296332 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 3.58s
2026-04-06 06:56:32.807409 | orchestrator | 2026-04-06 06:56:32 | INFO  | Prepare task for execution of grafana.
2026-04-06 06:56:32.872162 | orchestrator | 2026-04-06 06:56:32 | INFO  | Task ff51ba8f-7b2b-4f5d-9971-3aed5a50c068 (grafana) was prepared for execution. 2026-04-06 06:56:32.872247 | orchestrator | 2026-04-06 06:56:32 | INFO  | It takes a moment until task ff51ba8f-7b2b-4f5d-9971-3aed5a50c068 (grafana) has been started and output is visible here. 2026-04-06 06:56:57.080856 | orchestrator | 2026-04-06 06:56:57.080967 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-06 06:56:57.080983 | orchestrator | 2026-04-06 06:56:57.080994 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-06 06:56:57.081080 | orchestrator | Monday 06 April 2026 06:56:38 +0000 (0:00:01.800) 0:00:01.800 ********** 2026-04-06 06:56:57.081093 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:56:57.081105 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:56:57.081117 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:56:57.081127 | orchestrator | 2026-04-06 06:56:57.081158 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-06 06:56:57.081170 | orchestrator | Monday 06 April 2026 06:56:39 +0000 (0:00:01.685) 0:00:03.486 ********** 2026-04-06 06:56:57.081181 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-04-06 06:56:57.081192 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-04-06 06:56:57.081203 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-04-06 06:56:57.081214 | orchestrator | 2026-04-06 06:56:57.081225 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-04-06 06:56:57.081236 | orchestrator | 2026-04-06 06:56:57.081247 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-06 06:56:57.081257 | orchestrator | Monday 06 April 2026 06:56:41 +0000 
(0:00:01.589) 0:00:05.076 ********** 2026-04-06 06:56:57.081269 | orchestrator | included: /ansible/roles/grafana/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-06 06:56:57.081280 | orchestrator | 2026-04-06 06:56:57.081291 | orchestrator | TASK [grafana : Checking if Grafana container needs upgrading] ***************** 2026-04-06 06:56:57.081302 | orchestrator | Monday 06 April 2026 06:56:45 +0000 (0:00:03.969) 0:00:09.046 ********** 2026-04-06 06:56:57.081313 | orchestrator | ok: [testbed-node-0] 2026-04-06 06:56:57.081324 | orchestrator | ok: [testbed-node-1] 2026-04-06 06:56:57.081334 | orchestrator | ok: [testbed-node-2] 2026-04-06 06:56:57.081345 | orchestrator | 2026-04-06 06:56:57.081356 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-04-06 06:56:57.081366 | orchestrator | Monday 06 April 2026 06:56:48 +0000 (0:00:03.068) 0:00:12.115 ********** 2026-04-06 06:56:57.081380 | orchestrator | ok: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:56:57.081399 | orchestrator | ok: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:56:57.081413 | orchestrator | ok: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 06:56:57.081435 | orchestrator | 2026-04-06 06:56:57.081449 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-04-06 06:56:57.081481 | orchestrator | Monday 06 April 2026 06:56:50 +0000 (0:00:01.755) 0:00:13.870 ********** 2026-04-06 06:56:57.081492 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-06 06:56:57.081504 | orchestrator | 2026-04-06 06:56:57.081515 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-06 06:56:57.081544 | orchestrator | Monday 06 April 2026 06:56:52 +0000 (0:00:02.259) 
0:00:16.129 **********
2026-04-06 06:56:57.081556 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 06:56:57.081567 | orchestrator |
2026-04-06 06:56:57.081578 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-04-06 06:56:57.081588 | orchestrator | Monday 06 April 2026 06:56:54 +0000 (0:00:02.031) 0:00:18.161 **********
2026-04-06 06:56:57.081600 | orchestrator | ok: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:56:57.081612 | orchestrator | ok: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:56:57.081623 | orchestrator | ok: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:56:57.081635 | orchestrator |
2026-04-06 06:56:57.081645 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-04-06 06:56:57.081656 | orchestrator | Monday 06 April 2026 06:56:56 +0000 (0:00:02.316) 0:00:20.478 **********
2026-04-06 06:56:57.081668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:56:57.081687 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:56:57.081712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:57:03.865409 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:57:03.865546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:57:03.865568 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:57:03.865580 | orchestrator |
2026-04-06 06:57:03.865591 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-04-06 06:57:03.865602 | orchestrator | Monday 06 April 2026 06:56:58 +0000 (0:00:01.483) 0:00:21.961 **********
2026-04-06 06:57:03.865613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:57:03.865624 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:57:03.865634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:57:03.865670 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:57:03.865681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:57:03.865691 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:57:03.865701 | orchestrator |
2026-04-06 06:57:03.865711 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-04-06 06:57:03.865735 | orchestrator | Monday 06 April 2026 06:56:59 +0000 (0:00:01.702) 0:00:23.664 **********
2026-04-06 06:57:03.865765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:57:03.865777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:57:03.865788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:57:03.865798 | orchestrator |
2026-04-06 06:57:03.865808 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-04-06 06:57:03.865828 | orchestrator | Monday 06 April 2026 06:57:02 +0000 (0:00:02.364) 0:00:26.029 **********
2026-04-06 06:57:03.865838 | orchestrator | ok: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:57:03.865849 | orchestrator | ok: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:57:03.865872 | orchestrator | ok: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:57:31.182819 | orchestrator |
2026-04-06 06:57:31.182930 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-04-06 06:57:31.182946 | orchestrator | Monday 06 April 2026 06:57:04 +0000 (0:00:02.632) 0:00:28.662 **********
2026-04-06 06:57:31.182959 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:57:31.182971 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:57:31.182982 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:57:31.182993 | orchestrator |
2026-04-06 06:57:31.183004 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-04-06 06:57:31.183015 | orchestrator | Monday 06 April 2026 06:57:06 +0000 (0:00:01.398) 0:00:30.060 **********
2026-04-06 06:57:31.183078 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-06 06:57:31.183090 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-06 06:57:31.183101 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-06 06:57:31.183112 | orchestrator |
2026-04-06 06:57:31.183123 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-04-06 06:57:31.183134 | orchestrator | Monday 06 April 2026 06:57:08 +0000 (0:00:02.282) 0:00:32.343 **********
2026-04-06 06:57:31.183146 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-06 06:57:31.183158 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-06 06:57:31.183197 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-06 06:57:31.183209 | orchestrator |
2026-04-06 06:57:31.183220 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ******
2026-04-06 06:57:31.183231 | orchestrator | Monday 06 April 2026 06:57:11 +0000 (0:00:03.202) 0:00:35.546 **********
2026-04-06 06:57:31.183242 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 06:57:31.183252 | orchestrator |
2026-04-06 06:57:31.183263 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] ***************************
2026-04-06 06:57:31.183274 | orchestrator | Monday 06 April 2026 06:57:13 +0000 (0:00:01.850) 0:00:37.396 **********
2026-04-06 06:57:31.183285 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:57:31.183296 | orchestrator | changed: [testbed-node-1]
2026-04-06 06:57:31.183307 | orchestrator | changed: [testbed-node-2]
2026-04-06 06:57:31.183318 | orchestrator |
2026-04-06 06:57:31.183329 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-04-06 06:57:31.183339 | orchestrator | Monday 06 April 2026 06:57:15 +0000 (0:00:01.965) 0:00:39.362 **********
2026-04-06 06:57:31.183350 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:57:31.183364 | orchestrator | changed: [testbed-node-1]
2026-04-06 06:57:31.183376 | orchestrator | changed: [testbed-node-2]
2026-04-06 06:57:31.183390 | orchestrator |
2026-04-06 06:57:31.183402 | orchestrator | TASK [service-check-containers : grafana | Check containers] *******************
2026-04-06 06:57:31.183415 | orchestrator | Monday 06 April 2026 06:57:18 +0000 (0:00:02.675) 0:00:42.038 **********
2026-04-06 06:57:31.183431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:57:31.183462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:57:31.183495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:57:31.183515 | orchestrator |
2026-04-06 06:57:31.183527 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] ***
2026-04-06 06:57:31.183538 | orchestrator | Monday 06 April 2026 06:57:20 +0000 (0:00:02.290) 0:00:44.328 **********
2026-04-06 06:57:31.183549 | orchestrator | changed: [testbed-node-0] => {
2026-04-06 06:57:31.183560 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 06:57:31.183571 | orchestrator | }
2026-04-06 06:57:31.183582 | orchestrator | changed: [testbed-node-1] => {
2026-04-06 06:57:31.183593 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 06:57:31.183603 | orchestrator | }
2026-04-06 06:57:31.183614 | orchestrator | changed: [testbed-node-2] => {
2026-04-06 06:57:31.183625 | orchestrator |  "msg": "Notifying handlers"
2026-04-06 06:57:31.183635 | orchestrator | }
2026-04-06 06:57:31.183646 | orchestrator |
2026-04-06 06:57:31.183657 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-06 06:57:31.183668 | orchestrator | Monday 06 April 2026 06:57:22 +0000 (0:00:01.381) 0:00:45.710 **********
2026-04-06 06:57:31.183708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:57:31.183733 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:57:31.183769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:57:31.183781 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:57:31.183799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 06:57:31.183810 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:57:31.183821 | orchestrator |
2026-04-06 06:57:31.183832 | orchestrator | TASK [grafana : Stopping all Grafana instances but the first node] *************
2026-04-06 06:57:31.183843 | orchestrator | Monday 06 April 2026 06:57:23 +0000 (0:00:01.442) 0:00:47.153 **********
2026-04-06 06:57:31.183854 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:57:31.183865 | orchestrator | changed: [testbed-node-1]
2026-04-06 06:57:31.183876 | orchestrator | changed: [testbed-node-2]
2026-04-06 06:57:31.183893 | orchestrator |
2026-04-06 06:57:31.183904 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-06 06:57:31.183915 | orchestrator | Monday 06 April 2026 06:57:30 +0000 (0:00:06.989) 0:00:54.142 **********
2026-04-06 06:57:31.183926 | orchestrator |
2026-04-06 06:57:31.183937 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-06 06:57:31.183948 | orchestrator | Monday 06 April 2026 06:57:30 +0000 (0:00:00.455) 0:00:54.598 **********
2026-04-06 06:57:31.183959 | orchestrator |
2026-04-06 06:57:31.183977 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-06 06:59:15.792448 | orchestrator | Monday 06 April 2026 06:57:31 +0000 (0:00:00.605) 0:00:55.204 **********
2026-04-06 06:59:15.792595 | orchestrator |
2026-04-06 06:59:15.792615 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-04-06 06:59:15.792627 | orchestrator | Monday 06 April 2026 06:57:32 +0000 (0:00:00.788) 0:00:55.993 **********
2026-04-06 06:59:15.792639 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:59:15.792651 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:59:15.792663 | orchestrator | changed: [testbed-node-0]
2026-04-06 06:59:15.792674 | orchestrator |
2026-04-06 06:59:15.792685 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-04-06 06:59:15.792696 | orchestrator | Monday 06 April 2026 06:58:11 +0000 (0:00:38.834) 0:01:34.828 **********
2026-04-06 06:59:15.792707 | orchestrator | skipping: [testbed-node-1]
2026-04-06 06:59:15.792718 | orchestrator | skipping: [testbed-node-2]
2026-04-06 06:59:15.792729 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-04-06 06:59:15.792741 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-04-06 06:59:15.792752 | orchestrator | ok: [testbed-node-0]
2026-04-06 06:59:15.792764 | orchestrator |
2026-04-06 06:59:15.792775 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-04-06 06:59:15.792786 | orchestrator | Monday 06 April 2026 06:58:38 +0000 (0:00:27.338) 0:02:02.166 **********
2026-04-06 06:59:15.792797 | orchestrator | skipping: [testbed-node-0]
2026-04-06 06:59:15.792808 | orchestrator | changed: [testbed-node-2]
2026-04-06 06:59:15.792819 | orchestrator | changed: [testbed-node-1]
2026-04-06 06:59:15.792830 | orchestrator |
2026-04-06 06:59:15.792841 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 06:59:15.792853 | orchestrator | testbed-node-0 : ok=19  changed=6  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 06:59:15.792867 | orchestrator | testbed-node-1 : ok=17  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 06:59:15.792878 | orchestrator | testbed-node-2 : ok=17  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 06:59:15.792889 | orchestrator |
2026-04-06 06:59:15.792900 | orchestrator |
2026-04-06 06:59:15.792911 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 06:59:15.792922 | orchestrator | Monday 06 April 2026 06:59:15 +0000 (0:00:36.997) 0:02:39.164 **********
2026-04-06 06:59:15.792933 | orchestrator | ===============================================================================
2026-04-06 06:59:15.792944 | orchestrator | grafana : Restart first grafana container ------------------------------ 38.83s
2026-04-06 06:59:15.792955 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 36.99s
2026-04-06 06:59:15.792966 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 27.34s
2026-04-06 06:59:15.792977 | orchestrator | grafana : Stopping all Grafana instances but the first node ------------- 6.99s
2026-04-06 06:59:15.792991 | orchestrator | grafana : include_tasks ------------------------------------------------- 3.97s
2026-04-06 06:59:15.793028 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 3.20s
2026-04-06 06:59:15.793072 | orchestrator | grafana : Checking if Grafana container needs upgrading ----------------- 3.07s
2026-04-06 06:59:15.793085 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 2.68s
2026-04-06 06:59:15.793098 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 2.63s
2026-04-06 06:59:15.793110 | orchestrator | grafana : Copying over config.json files -------------------------------- 2.36s
2026-04-06 06:59:15.793123 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 2.32s
2026-04-06 06:59:15.793135 | orchestrator | service-check-containers : grafana | Check containers ------------------- 2.29s
2026-04-06 06:59:15.793147 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 2.28s
2026-04-06 06:59:15.793160 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 2.26s
2026-04-06 06:59:15.793173 | orchestrator | grafana : include_tasks ------------------------------------------------- 2.03s
2026-04-06 06:59:15.793186 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 1.97s
2026-04-06 06:59:15.793215 | orchestrator | grafana : Check if the folder for custom grafana dashboards exists ------ 1.85s
2026-04-06 06:59:15.793229 | orchestrator | grafana : Flush handlers ------------------------------------------------ 1.85s
2026-04-06 06:59:15.793241 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.76s
2026-04-06 06:59:15.793254 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.70s
2026-04-06 06:59:15.979535 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/510-clusterapi.sh
2026-04-06 06:59:15.988558 | orchestrator | + set -e
2026-04-06 06:59:15.988641 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-06 06:59:15.988658 | orchestrator | ++ export INTERACTIVE=false
2026-04-06 06:59:15.988669 | orchestrator | ++ INTERACTIVE=false
2026-04-06 06:59:15.988681 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-06 06:59:15.988692 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-06 06:59:15.988712 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-06 06:59:15.990320 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-06 06:59:15.996647 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-06 06:59:15.996710 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-06 06:59:15.997783 | orchestrator | ++ semver 10.0.0 8.0.0
2026-04-06 06:59:16.064588 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-06 06:59:16.064678 | orchestrator | + osism apply clusterapi
2026-04-06 06:59:17.397392 | orchestrator | 2026-04-06 06:59:17 | INFO  | Prepare task for execution of clusterapi.
2026-04-06 06:59:17.464479 | orchestrator | 2026-04-06 06:59:17 | INFO  | Task 8459e740-20d4-4f58-86a8-7947be378142 (clusterapi) was prepared for execution.
2026-04-06 06:59:17.464561 | orchestrator | 2026-04-06 06:59:17 | INFO  | It takes a moment until task 8459e740-20d4-4f58-86a8-7947be378142 (clusterapi) has been started and output is visible here.
2026-04-06 07:00:13.023414 | orchestrator |
2026-04-06 07:00:13.023524 | orchestrator | PLAY [Apply cert_manager role] *************************************************
2026-04-06 07:00:13.023547 | orchestrator |
2026-04-06 07:00:13.023564 | orchestrator | TASK [Include cert_manager role] ***********************************************
2026-04-06 07:00:13.023579 | orchestrator | Monday 06 April 2026 06:59:22 +0000 (0:00:01.471) 0:00:01.471 **********
2026-04-06 07:00:13.023594 | orchestrator | included: cert_manager for testbed-manager
2026-04-06 07:00:13.023610 | orchestrator |
2026-04-06 07:00:13.023626 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] *********************************
2026-04-06 07:00:13.023642 | orchestrator | Monday 06 April 2026 06:59:24 +0000 (0:00:01.841) 0:00:03.313 **********
2026-04-06 07:00:13.023657 | orchestrator | ok: [testbed-manager]
2026-04-06 07:00:13.023672 | orchestrator |
2026-04-06 07:00:13.023687 | orchestrator | TASK [cert_manager : Deploy cert-manager] **************************************
2026-04-06 07:00:13.023702 | orchestrator | Monday 06 April 2026 06:59:28 +0000 (0:00:04.651) 0:00:07.964 **********
2026-04-06 07:00:13.023717 | orchestrator | ok: [testbed-manager]
2026-04-06 07:00:13.023759 | orchestrator |
2026-04-06 07:00:13.023775 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] ***********************
2026-04-06 07:00:13.023788 | orchestrator |
2026-04-06 07:00:13.023801 | orchestrator | TASK [Get capi-system namespace phase] *****************************************
2026-04-06 07:00:13.023815 | orchestrator | Monday 06 April 2026 06:59:33 +0000 (0:00:05.009) 0:00:12.974 **********
2026-04-06 07:00:13.023830 | orchestrator | ok: [testbed-manager]
2026-04-06 07:00:13.023844 | orchestrator |
2026-04-06 07:00:13.023859 | orchestrator | TASK [Set capi-system-phase fact] **********************************************
2026-04-06 07:00:13.023873 | orchestrator | Monday 06 April 2026 06:59:36 +0000 (0:00:02.449) 0:00:15.424 **********
2026-04-06 07:00:13.023887 | orchestrator | ok: [testbed-manager]
2026-04-06 07:00:13.023901 | orchestrator |
2026-04-06 07:00:13.023915 | orchestrator | TASK [Initialize the CAPI management cluster] **********************************
2026-04-06 07:00:13.023929 | orchestrator | Monday 06 April 2026 06:59:37 +0000 (0:00:01.118) 0:00:16.543 **********
2026-04-06 07:00:13.023945 | orchestrator | skipping: [testbed-manager]
2026-04-06 07:00:13.023960 | orchestrator |
2026-04-06 07:00:13.023976 | orchestrator | TASK [Upgrade the CAPI management cluster] *************************************
2026-04-06 07:00:13.023992 | orchestrator | Monday 06 April 2026 06:59:38 +0000 (0:00:01.129) 0:00:17.672 **********
2026-04-06 07:00:13.024043 | orchestrator | ok: [testbed-manager]
2026-04-06 07:00:13.024058 | orchestrator |
2026-04-06 07:00:13.024074 | orchestrator | TASK [Install openstack-resource-controller] ***********************************
2026-04-06 07:00:13.024088 | orchestrator | Monday 06 April 2026 07:00:09 +0000 (0:00:30.835) 0:00:48.508 **********
2026-04-06 07:00:13.024104 | orchestrator | changed: [testbed-manager]
2026-04-06 07:00:13.024120 | orchestrator |
2026-04-06 07:00:13.024135 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 07:00:13.024151 | orchestrator | testbed-manager : ok=7  changed=1  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-06 07:00:13.024167 | orchestrator |
2026-04-06 07:00:13.024183 | orchestrator |
2026-04-06 07:00:13.024197 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 07:00:13.024213 | orchestrator | Monday 06 April 2026 07:00:12 +0000 (0:00:03.375) 0:00:51.883 **********
2026-04-06 07:00:13.024230 | orchestrator | ===============================================================================
2026-04-06 07:00:13.024245 | orchestrator | Upgrade the CAPI management cluster ------------------------------------ 30.84s
2026-04-06 07:00:13.024261 | orchestrator | cert_manager : Deploy cert-manager -------------------------------------- 5.01s
2026-04-06 07:00:13.024271 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 4.65s
2026-04-06 07:00:13.024282 | orchestrator | Install openstack-resource-controller ----------------------------------- 3.38s
2026-04-06 07:00:13.024292 | orchestrator | Get capi-system namespace phase ----------------------------------------- 2.45s
2026-04-06 07:00:13.024303 | orchestrator | Include cert_manager role ----------------------------------------------- 1.84s
2026-04-06 07:00:13.024314 | orchestrator | Initialize the CAPI management cluster ---------------------------------- 1.13s
2026-04-06 07:00:13.024325 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 1.12s
2026-04-06 07:00:13.149056 | orchestrator | + osism apply -a upgrade magnum
2026-04-06 07:00:14.399719 | orchestrator | 2026-04-06 07:00:14 | INFO  | Prepare task for execution of magnum.
2026-04-06 07:00:14.467566 | orchestrator | 2026-04-06 07:00:14 | INFO  | Task 035f43ca-366a-4ad3-a470-1d49e1eea007 (magnum) was prepared for execution.
2026-04-06 07:00:14.467821 | orchestrator | 2026-04-06 07:00:14 | INFO  | It takes a moment until task 035f43ca-366a-4ad3-a470-1d49e1eea007 (magnum) has been started and output is visible here.
2026-04-06 07:00:35.147137 | orchestrator |
2026-04-06 07:00:35.147238 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 07:00:35.147251 | orchestrator |
2026-04-06 07:00:35.147261 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 07:00:35.147293 | orchestrator | Monday 06 April 2026 07:00:19 +0000 (0:00:01.419) 0:00:01.419 **********
2026-04-06 07:00:35.147303 | orchestrator | ok: [testbed-node-0]
2026-04-06 07:00:35.147313 | orchestrator | ok: [testbed-node-1]
2026-04-06 07:00:35.147321 | orchestrator | ok: [testbed-node-2]
2026-04-06 07:00:35.147330 | orchestrator |
2026-04-06 07:00:35.147339 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 07:00:35.147348 | orchestrator | Monday 06 April 2026 07:00:20 +0000 (0:00:01.752) 0:00:03.172 **********
2026-04-06 07:00:35.147357 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-04-06 07:00:35.147366 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-04-06 07:00:35.147375 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-04-06 07:00:35.147384 | orchestrator |
2026-04-06 07:00:35.147393 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-04-06 07:00:35.147402 | orchestrator |
2026-04-06 07:00:35.147411 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-06 07:00:35.147420 | orchestrator | Monday 06 April 2026 07:00:22 +0000 (0:00:01.678) 0:00:04.850 **********
2026-04-06 07:00:35.147429 | orchestrator | included: /ansible/roles/magnum/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 07:00:35.147438 | orchestrator |
2026-04-06 07:00:35.147447 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-04-06
07:00:35.147456 | orchestrator | Monday 06 April 2026 07:00:25 +0000 (0:00:03.314) 0:00:08.165 ********** 2026-04-06 07:00:35.147472 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 07:00:35.147485 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option 
httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 07:00:35.147526 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 07:00:35.147545 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 07:00:35.147557 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 07:00:35.147566 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 07:00:35.147575 | orchestrator | 2026-04-06 07:00:35.147585 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-04-06 07:00:35.147594 | orchestrator | Monday 06 April 2026 07:00:29 +0000 (0:00:03.146) 0:00:11.312 ********** 2026-04-06 07:00:35.147603 | 
orchestrator | skipping: [testbed-node-0]
2026-04-06 07:00:35.147616 | orchestrator |
2026-04-06 07:00:35.147631 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-04-06 07:00:35.147646 | orchestrator | Monday 06 April 2026 07:00:30 +0000 (0:00:01.137) 0:00:12.450 **********
2026-04-06 07:00:35.147662 | orchestrator | skipping: [testbed-node-0]
2026-04-06 07:00:35.147677 | orchestrator | skipping: [testbed-node-1]
2026-04-06 07:00:35.147693 | orchestrator | skipping: [testbed-node-2]
2026-04-06 07:00:35.147708 | orchestrator |
2026-04-06 07:00:35.147724 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-04-06 07:00:35.147739 | orchestrator | Monday 06 April 2026 07:00:31 +0000 (0:00:01.366) 0:00:13.816 **********
2026-04-06 07:00:35.147765 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-06 07:00:35.147781 | orchestrator |
2026-04-06 07:00:35.147796 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-04-06 07:00:35.147812 | orchestrator | Monday 06 April 2026 07:00:33 +0000 (0:00:02.254) 0:00:16.071 **********
2026-04-06 07:00:35.147852 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511',
'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 07:00:42.725377 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 07:00:42.725530 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 07:00:42.725551 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 07:00:42.725659 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 07:00:42.725694 | orchestrator | 
ok: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-06 07:00:42.725706 | orchestrator |
2026-04-06 07:00:42.725718 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-04-06 07:00:42.725730 | orchestrator | Monday 06 April 2026 07:00:37 +0000 (0:00:03.564) 0:00:19.635 **********
2026-04-06 07:00:42.725741 | orchestrator | ok: [testbed-node-0]
2026-04-06 07:00:42.725751 | orchestrator | ok: [testbed-node-1]
2026-04-06 07:00:42.725761 | orchestrator | ok: [testbed-node-2]
2026-04-06 07:00:42.725771 | orchestrator |
2026-04-06 07:00:42.725781 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-06 07:00:42.725791 | orchestrator | Monday 06 April 2026 07:00:38 +0000 (0:00:01.354) 0:00:20.990 **********
2026-04-06 07:00:42.725801 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 07:00:42.725811 | orchestrator |
2026-04-06 07:00:42.725821 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-04-06 07:00:42.725831 | orchestrator | Monday 06 April 2026 07:00:40 +0000 (0:00:01.947) 0:00:22.938 **********
2026-04-06 07:00:42.725842 | orchestrator
| ok: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 07:00:42.725854 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 07:00:42.725877 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-06 07:00:42.725902 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 07:00:46.253564 | orchestrator | ok: [testbed-node-1] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 07:00:46.253672 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-06 07:00:46.253730 | orchestrator | 2026-04-06 07:00:46.253758 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-04-06 07:00:46.253778 | orchestrator | Monday 06 April 2026 07:00:44 +0000 (0:00:03.339) 0:00:26.277 ********** 2026-04-06 07:00:46.253838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 07:00:46.253867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 07:00:46.253882 | orchestrator | skipping: [testbed-node-0] 2026-04-06 07:00:46.253918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 07:00:46.253931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 07:00:46.253953 | orchestrator | skipping: [testbed-node-1] 2026-04-06 07:00:46.253965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 07:00:46.253983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 07:00:46.254112 | orchestrator | skipping: [testbed-node-2] 2026-04-06 07:00:46.254126 | orchestrator | 2026-04-06 07:00:46.254140 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-04-06 07:00:46.254154 | orchestrator | Monday 06 April 2026 07:00:45 +0000 
(0:00:01.811) 0:00:28.089 ********** 2026-04-06 07:00:46.254177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-06 07:00:50.319637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-06 07:00:50.319760 | orchestrator | skipping: [testbed-node-0] 2026-04-06 
07:00:50.319779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 07:00:50.319806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-06 07:00:50.319817 | orchestrator | skipping: [testbed-node-1]
2026-04-06 07:00:50.319829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 07:00:50.319850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-06 07:00:50.319857 | orchestrator | skipping: [testbed-node-2]
2026-04-06 07:00:50.319875 | orchestrator |
2026-04-06 07:00:50.319886 | orchestrator | TASK [magnum : Copying over config.json files for services] ********************
2026-04-06 07:00:50.319897 | orchestrator | Monday 06 April 2026 07:00:47 +0000 (0:00:02.135) 0:00:30.225 **********
2026-04-06 07:00:50.319908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 07:00:50.319924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 07:00:50.319932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 07:00:50.319950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-06 07:00:58.385047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-06 07:00:58.385156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-06 07:00:58.385172 | orchestrator |
2026-04-06 07:00:58.385185 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2026-04-06 07:00:58.385197 | orchestrator | Monday 06 April 2026 07:00:51 +0000 (0:00:03.502) 0:00:33.727 **********
2026-04-06 07:00:58.385228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 07:00:58.385242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 07:00:58.385291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 07:00:58.385304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-06 07:00:58.385320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-06 07:00:58.385331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-06 07:00:58.385342 | orchestrator |
2026-04-06 07:00:58.385352 | orchestrator | TASK [magnum : Copying over existing policy file] ******************************
2026-04-06 07:00:58.385362 | orchestrator | Monday 06 April 2026 07:00:58 +0000 (0:00:06.563) 0:00:40.291 **********
2026-04-06 07:00:58.385380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 07:01:02.734822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-06 07:01:02.734915 | orchestrator | skipping: [testbed-node-0]
2026-04-06 07:01:02.734933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 07:01:02.734964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-06 07:01:02.734976 | orchestrator | skipping: [testbed-node-1]
2026-04-06 07:01:02.735040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 07:01:02.735092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-06 07:01:02.735104 | orchestrator | skipping: [testbed-node-2]
2026-04-06 07:01:02.735114 | orchestrator |
2026-04-06 07:01:02.735125 | orchestrator | TASK [service-check-containers : magnum | Check containers] ********************
2026-04-06 07:01:02.735136 | orchestrator | Monday 06 April 2026 07:01:00 +0000 (0:00:02.233) 0:00:42.525 **********
2026-04-06 07:01:02.735172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 07:01:02.735192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 07:01:02.735204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 07:01:02.735231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-06 07:01:28.724718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-06 07:01:28.724837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-06 07:01:28.724852 | orchestrator |
2026-04-06 07:01:28.724873 | orchestrator | TASK [service-check-containers : magnum | Notify handlers to restart containers] ***
2026-04-06 07:01:28.724883 | orchestrator | Monday 06 April 2026 07:01:03 +0000 (0:00:03.663) 0:00:46.188 **********
2026-04-06 07:01:28.724893 | orchestrator | changed: [testbed-node-0] => {
2026-04-06 07:01:28.724917 | orchestrator |     "msg": "Notifying handlers"
2026-04-06 07:01:28.724926 | orchestrator | }
2026-04-06 07:01:28.724935 | orchestrator | changed: [testbed-node-1] => {
2026-04-06 07:01:28.724943 | orchestrator |     "msg": "Notifying handlers"
2026-04-06 07:01:28.724951 | orchestrator | }
2026-04-06 07:01:28.724959 | orchestrator | changed: [testbed-node-2] => {
2026-04-06 07:01:28.724967 | orchestrator |     "msg": "Notifying handlers"
2026-04-06 07:01:28.724975 | orchestrator | }
2026-04-06 07:01:28.725018 | orchestrator |
2026-04-06 07:01:28.725027 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-06 07:01:28.725036 | orchestrator | Monday 06 April 2026 07:01:05 +0000 (0:00:01.418) 0:00:47.606 **********
2026-04-06 07:01:28.725065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 07:01:28.725076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-06 07:01:28.725085 | orchestrator | skipping: [testbed-node-0]
2026-04-06 07:01:28.725109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 07:01:28.725120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-06 07:01:28.725129 | orchestrator | skipping: [testbed-node-1]
2026-04-06 07:01:28.725142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-04-06 07:01:28.725168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-06 07:01:28.725177 | orchestrator | skipping: [testbed-node-2]
2026-04-06 07:01:28.725185 | orchestrator |
2026-04-06 07:01:28.725194 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2026-04-06 07:01:28.725202 | orchestrator | Monday 06 April 2026 07:01:07 +0000 (0:00:02.119) 0:00:49.726 **********
2026-04-06 07:01:28.725210 | orchestrator | changed: [testbed-node-0]
2026-04-06 07:01:28.725218 | orchestrator |
2026-04-06 07:01:28.725226 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-06 07:01:28.725234 | orchestrator | Monday 06 April 2026 07:01:28 +0000 (0:00:20.793) 0:01:10.519 **********
2026-04-06 07:01:28.725242 | orchestrator |
2026-04-06 07:01:28.725250 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-06 07:01:28.725264 | orchestrator | Monday 06 April 2026 07:01:28 +0000 (0:00:00.449) 0:01:10.969 **********
2026-04-06 07:02:17.159787 | orchestrator |
2026-04-06 07:02:17.159900 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-06 07:02:17.159917 | orchestrator | Monday 06 April 2026 07:01:29 +0000 (0:00:00.488) 0:01:11.457 **********
2026-04-06 07:02:17.159930 | orchestrator |
2026-04-06 07:02:17.159941 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-04-06 07:02:17.159952 | orchestrator | Monday 06 April 2026 07:01:30 +0000 (0:00:00.803) 0:01:12.261 **********
2026-04-06 07:02:17.159964 | orchestrator | changed: [testbed-node-0]
2026-04-06 07:02:17.160069 | orchestrator | changed: [testbed-node-1]
2026-04-06 07:02:17.160083 | orchestrator | changed: [testbed-node-2]
2026-04-06 07:02:17.160095 | orchestrator |
2026-04-06 07:02:17.160106 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2026-04-06 07:02:17.160117 | orchestrator | Monday 06 April 2026 07:01:52 +0000 (0:00:22.299) 0:01:34.560 **********
2026-04-06 07:02:17.160128 | orchestrator | changed: [testbed-node-2]
2026-04-06 07:02:17.160139 | orchestrator | changed: [testbed-node-0]
2026-04-06 07:02:17.160150 | orchestrator | changed: [testbed-node-1]
2026-04-06 07:02:17.160161 | orchestrator |
2026-04-06 07:02:17.160187 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 07:02:17.160200 | orchestrator | testbed-node-0 : ok=16  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-06 07:02:17.160225 | orchestrator | testbed-node-1 : ok=14  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-06 07:02:17.160266 | orchestrator | testbed-node-2 : ok=14  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-06 07:02:17.160278 | orchestrator |
2026-04-06 07:02:17.160288 | orchestrator |
2026-04-06 07:02:17.160300 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 07:02:17.160311 | orchestrator | Monday 06 April 2026 07:02:16 +0000 (0:00:24.562) 0:01:59.122 **********
2026-04-06 07:02:17.160322 | orchestrator | ===============================================================================
2026-04-06 07:02:17.160333 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 24.56s
2026-04-06 07:02:17.160346 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 22.30s
2026-04-06 07:02:17.160374 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 20.79s
2026-04-06 07:02:17.160388 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.56s
2026-04-06 07:02:17.160401 | orchestrator | service-check-containers : magnum | Check containers -------------------- 3.66s
2026-04-06 07:02:17.160414 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.56s
2026-04-06 07:02:17.160427 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.50s
2026-04-06 07:02:17.160440 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.34s
2026-04-06 07:02:17.160453 | orchestrator | magnum : include_tasks -------------------------------------------------- 3.32s
2026-04-06 07:02:17.160467 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 3.15s
2026-04-06 07:02:17.160480 | orchestrator | magnum : Check if kubeconfig file is supplied --------------------------- 2.25s
2026-04-06 07:02:17.160493 | orchestrator | magnum : Copying over existing policy file ------------------------------ 2.23s
2026-04-06 07:02:17.160506 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 2.14s
2026-04-06 07:02:17.160519 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.12s
2026-04-06 07:02:17.160532 | orchestrator | magnum : include_tasks -------------------------------------------------- 1.95s
2026-04-06 07:02:17.160546 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS certificate --- 1.81s
2026-04-06 07:02:17.160560 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.75s
2026-04-06 07:02:17.160573 | orchestrator | magnum : Flush handlers ------------------------------------------------- 1.74s
2026-04-06 07:02:17.160586 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.68s
2026-04-06 07:02:17.160600 | orchestrator | service-check-containers : magnum | Notify handlers to restart containers --- 1.42s
2026-04-06 07:02:17.938416 | orchestrator | ok: Runtime: 2:44:06.327776
2026-04-06 07:02:18.357028 |
2026-04-06 07:02:18.357166 | TASK [Bootstrap services]
2026-04-06 07:02:18.892704 | orchestrator | skipping: Conditional result was False
2026-04-06 07:02:18.919421 |
2026-04-06 07:02:18.919586 | TASK [Run checks after the upgrade]
2026-04-06 07:02:19.613180 | orchestrator | + set -e
2026-04-06 07:02:19.613381 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-06 07:02:19.613405 | orchestrator | ++ export INTERACTIVE=false
2026-04-06 07:02:19.613427 | orchestrator | ++ INTERACTIVE=false
2026-04-06 07:02:19.613441 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-06 07:02:19.613454 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-06 07:02:19.613469 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-06 07:02:19.614728 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-06 07:02:19.621241 | orchestrator |
2026-04-06 07:02:19.621304 | orchestrator | # CHECK
2026-04-06 07:02:19.621316 | orchestrator |
2026-04-06 07:02:19.621328 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-06 07:02:19.621344 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-06 07:02:19.621356 | orchestrator | + echo
2026-04-06 07:02:19.621367 | orchestrator | + echo '# CHECK'
2026-04-06 07:02:19.621377 | orchestrator | + echo
2026-04-06 07:02:19.621400 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1
testbed-node-2 2026-04-06 07:02:19.622343 | orchestrator | ++ semver 10.0.0 5.0.0 2026-04-06 07:02:19.681315 | orchestrator | 2026-04-06 07:02:19.681434 | orchestrator | ## Containers @ testbed-manager 2026-04-06 07:02:19.681461 | orchestrator | 2026-04-06 07:02:19.681483 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-06 07:02:19.681504 | orchestrator | + echo 2026-04-06 07:02:19.681526 | orchestrator | + echo '## Containers @ testbed-manager' 2026-04-06 07:02:19.681547 | orchestrator | + echo 2026-04-06 07:02:19.681567 | orchestrator | + osism container testbed-manager ps 2026-04-06 07:02:21.253422 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-06 07:02:21.253620 | orchestrator | 7c4ec2d7f9ef registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_blackbox_exporter 2026-04-06 07:02:21.253665 | orchestrator | 62aeb39e1230 registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_alertmanager 2026-04-06 07:02:21.253682 | orchestrator | a6e060066574 registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_cadvisor 2026-04-06 07:02:21.253699 | orchestrator | 3135d3fb792b registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter 2026-04-06 07:02:21.253715 | orchestrator | a9944bc08209 registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_server 2026-04-06 07:02:21.253731 | orchestrator | 44673de5f92d registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-04-06 07:02:21.253753 | orchestrator | 0ef57f325d63 
registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-04-06 07:02:21.253768 | orchestrator | de54809cf639 registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-04-06 07:02:21.253809 | orchestrator | c3d743b779d3 registry.osism.tech/osism/openstackclient:2025.1 "/usr/bin/dumb-init …" 3 hours ago Up 3 hours openstackclient 2026-04-06 07:02:21.253826 | orchestrator | 5b030b9d9d1c registry.osism.tech/osism/inventory-reconciler:0.20260322.0 "/sbin/tini -- /entr…" 3 hours ago Up 3 hours (healthy) manager-inventory_reconciler-1 2026-04-06 07:02:21.253842 | orchestrator | 3e974285ca4f registry.osism.tech/osism/ceph-ansible:0.20260322.0 "/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) ceph-ansible 2026-04-06 07:02:21.253858 | orchestrator | 000107d11b64 registry.osism.tech/osism/osism-kubernetes:0.20260322.0 "/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) osism-kubernetes 2026-04-06 07:02:21.253874 | orchestrator | 2f7960196707 registry.osism.tech/osism/kolla-ansible:0.20260328.0 "/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) kolla-ansible 2026-04-06 07:02:21.253914 | orchestrator | 4a88415b0f16 registry.osism.tech/osism/osism-ansible:0.20260322.0 "/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) osism-ansible 2026-04-06 07:02:21.253933 | orchestrator | 07ab5a52023f registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- sleep…" 3 hours ago Up 3 hours (healthy) osismclient 2026-04-06 07:02:21.253949 | orchestrator | fe899ddc7e55 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) manager-flower-1 2026-04-06 07:02:21.253965 | orchestrator | 665aad322a92 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up About an hour (healthy) manager-listener-1 2026-04-06 07:02:21.254053 | orchestrator | eb558fc0f692 
registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-04-06 07:02:21.254073 | orchestrator | 3412e424a06a registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) manager-beat-1 2026-04-06 07:02:21.254088 | orchestrator | dbc9cf2f78e3 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) manager-openstack-1 2026-04-06 07:02:21.254102 | orchestrator | f06832d46f98 registry.osism.tech/osism/osism-frontend:0.20260320.0 "docker-entrypoint.s…" 3 hours ago Up 3 hours 192.168.16.5:3000->3000/tcp osism-frontend 2026-04-06 07:02:21.254116 | orchestrator | 76f4b9ecc788 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 4 hours ago Up 4 hours cephclient 2026-04-06 07:02:21.254141 | orchestrator | e9ce20ec447d phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 5 hours ago Up 5 hours (healthy) 80/tcp phpmyadmin 2026-04-06 07:02:21.254156 | orchestrator | cda30e19adea registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 5 hours ago Up 5 hours (healthy) 8080/tcp homer 2026-04-06 07:02:21.254171 | orchestrator | f8716af02d81 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 5 hours ago Up 5 hours 80/tcp cgit 2026-04-06 07:02:21.254185 | orchestrator | 52996163e063 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 5 hours ago Up 5 hours (healthy) 192.168.16.5:3128->3128/tcp squid 2026-04-06 07:02:21.254205 | orchestrator | c4b5b426bab9 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 5 hours ago Up 3 hours (healthy) 8000/tcp manager-ara-server-1 2026-04-06 07:02:21.254219 | orchestrator | 6488764b747c registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 5 hours ago Up 3 hours (healthy) 6379/tcp manager-redis-1 2026-04-06 07:02:21.254233 | orchestrator | 81e8d502a3cb 
registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 5 hours ago Up 3 hours (healthy) 3306/tcp manager-mariadb-1 2026-04-06 07:02:21.254258 | orchestrator | 81ea1e4eb5b2 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 5 hours ago Up 5 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-04-06 07:02:21.407807 | orchestrator | 2026-04-06 07:02:21.407887 | orchestrator | ## Images @ testbed-manager 2026-04-06 07:02:21.407896 | orchestrator | 2026-04-06 07:02:21.407902 | orchestrator | + echo 2026-04-06 07:02:21.407909 | orchestrator | + echo '## Images @ testbed-manager' 2026-04-06 07:02:21.407915 | orchestrator | + echo 2026-04-06 07:02:21.407921 | orchestrator | + osism container testbed-manager images 2026-04-06 07:02:22.881563 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-06 07:02:22.881695 | orchestrator | registry.osism.tech/osism/openstackclient 2025.1 439baeb0fe12 3 hours ago 213MB 2026-04-06 07:02:22.881712 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 0455d6e4cec5 27 hours ago 239MB 2026-04-06 07:02:22.881724 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20260328.0 38f6ca42e9a0 6 days ago 635MB 2026-04-06 07:02:22.881735 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 8 days ago 590MB 2026-04-06 07:02:22.881746 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 20.3.1.20260328 28c0d33bbf93 8 days ago 683MB 2026-04-06 07:02:22.881757 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 8 days ago 277MB 2026-04-06 07:02:22.881768 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter 0.25.0.20260328 1bf017fd7bf3 8 days ago 319MB 2026-04-06 07:02:22.881806 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager 0.28.1.20260328 d1986023a383 8 days ago 
415MB 2026-04-06 07:02:22.881818 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 8 days ago 368MB 2026-04-06 07:02:22.881830 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-server 3.2.1.20260328 4f5732d5eb69 8 days ago 860MB 2026-04-06 07:02:22.881841 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 8 days ago 317MB 2026-04-06 07:02:22.881851 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20260322.0 3e18c5de9bc5 2 weeks ago 634MB 2026-04-06 07:02:22.881877 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20260322.0 c68c1f5728ae 2 weeks ago 1.24GB 2026-04-06 07:02:22.881889 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20260322.0 f6e7e0d58bb1 2 weeks ago 585MB 2026-04-06 07:02:22.881900 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20260322.0 9806642932fd 2 weeks ago 357MB 2026-04-06 07:02:22.881911 | orchestrator | registry.osism.tech/osism/osism 0.20260320.0 5d0420989a40 2 weeks ago 408MB 2026-04-06 07:02:22.881922 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20260320.0 80b833af5991 2 weeks ago 232MB 2026-04-06 07:02:22.881932 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB 2026-04-06 07:02:22.881956 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 months ago 11.5MB 2026-04-06 07:02:22.881967 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 4 months ago 608MB 2026-04-06 07:02:22.882111 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB 2026-04-06 07:02:22.882131 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-04-06 07:02:22.882142 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 
months ago 578MB 2026-04-06 07:02:22.882153 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 4 months ago 308MB 2026-04-06 07:02:22.882163 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-04-06 07:02:22.882174 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 4 months ago 404MB 2026-04-06 07:02:22.882185 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 4 months ago 839MB 2026-04-06 07:02:22.882196 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-04-06 07:02:22.882206 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 4 months ago 330MB 2026-04-06 07:02:22.882217 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 4 months ago 613MB 2026-04-06 07:02:22.882228 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 4 months ago 560MB 2026-04-06 07:02:22.882274 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 4 months ago 1.23GB 2026-04-06 07:02:22.882287 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 4 months ago 383MB 2026-04-06 07:02:22.882298 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 4 months ago 238MB 2026-04-06 07:02:22.882320 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB 2026-04-06 07:02:22.882331 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB 2026-04-06 07:02:22.882342 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB 2026-04-06 07:02:22.882353 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 
11cc59587f6a 8 months ago 226MB 2026-04-06 07:02:22.882363 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 11 months ago 453MB 2026-04-06 07:02:22.882374 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB 2026-04-06 07:02:22.882385 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB 2026-04-06 07:02:23.042590 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-06 07:02:23.042712 | orchestrator | ++ semver 10.0.0 5.0.0 2026-04-06 07:02:23.097803 | orchestrator | 2026-04-06 07:02:23.097902 | orchestrator | ## Containers @ testbed-node-0 2026-04-06 07:02:23.097917 | orchestrator | 2026-04-06 07:02:23.097929 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-06 07:02:23.097941 | orchestrator | + echo 2026-04-06 07:02:23.097952 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-04-06 07:02:23.097964 | orchestrator | + echo 2026-04-06 07:02:23.098008 | orchestrator | + osism container testbed-node-0 ps 2026-04-06 07:02:24.720090 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-06 07:02:24.720298 | orchestrator | da158c6ca677 registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328 "dumb-init --single-…" 11 seconds ago Up 10 seconds (health: starting) magnum_conductor 2026-04-06 07:02:24.720358 | orchestrator | e4c0e6d4cfd9 registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328 "dumb-init --single-…" 44 seconds ago Up 43 seconds (healthy) magnum_api 2026-04-06 07:02:24.720372 | orchestrator | cced14b34a5e registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328 "dumb-init --single-…" 4 minutes ago Up 4 minutes grafana 2026-04-06 07:02:24.720384 | orchestrator | 3d052d0bb7e7 registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes 
prometheus_elasticsearch_exporter 2026-04-06 07:02:24.720398 | orchestrator | ce881ca27e34 registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_cadvisor 2026-04-06 07:02:24.720409 | orchestrator | fda9f0477119 registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_memcached_exporter 2026-04-06 07:02:24.720421 | orchestrator | 9f9e7461aa6e registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_mysqld_exporter 2026-04-06 07:02:24.720433 | orchestrator | d0d537680b51 registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter 2026-04-06 07:02:24.720444 | orchestrator | 97a75a5bfc47 registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) manila_share 2026-04-06 07:02:24.720480 | orchestrator | 73fd54739b96 registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_scheduler 2026-04-06 07:02:24.720522 | orchestrator | 463e8e4f780d registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_data 2026-04-06 07:02:24.720536 | orchestrator | 0ef2e54ac1bb registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api 2026-04-06 07:02:24.720547 | orchestrator | 8e83cc7ef6e1 registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_worker 2026-04-06 07:02:24.720558 | orchestrator | 89664ec1b53a 
registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_housekeeping 2026-04-06 07:02:24.720569 | orchestrator | e5f3f4c744cc registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_health_manager 2026-04-06 07:02:24.720580 | orchestrator | 55007e4a8cf0 registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes octavia_driver_agent 2026-04-06 07:02:24.720592 | orchestrator | 9ec370733d46 registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) octavia_api 2026-04-06 07:02:24.720639 | orchestrator | e2af3f3b8066 registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_notifier 2026-04-06 07:02:24.720652 | orchestrator | 9e76433cdfcb registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_listener 2026-04-06 07:02:24.720664 | orchestrator | 628b66d645f6 registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328 "dumb-init --single-…" 21 minutes ago Up 20 minutes (healthy) aodh_evaluator 2026-04-06 07:02:24.720674 | orchestrator | 7b6d3cf64078 registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) aodh_api 2026-04-06 07:02:24.720704 | orchestrator | 50d143975879 registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328 "dumb-init --single-…" 22 minutes ago Up 22 minutes ceilometer_central 2026-04-06 07:02:24.720717 | orchestrator | 2983336d4091 registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 23 minutes 
(healthy) ceilometer_notification 2026-04-06 07:02:24.720911 | orchestrator | 313324298cbd registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) designate_worker 2026-04-06 07:02:24.720931 | orchestrator | c8bd71651713 registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_mdns 2026-04-06 07:02:24.720943 | orchestrator | ef56e6d463b9 registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_producer 2026-04-06 07:02:24.720966 | orchestrator | d4e224111fb9 registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 2026-04-06 07:02:24.721010 | orchestrator | 76dd3d6cc7b5 registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api 2026-04-06 07:02:24.721021 | orchestrator | 66d05ad85340 registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9 2026-04-06 07:02:24.721032 | orchestrator | 6ac7aacaf2c9 registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker 2026-04-06 07:02:24.721043 | orchestrator | c8b2a45dfd11 registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-04-06 07:02:24.721054 | orchestrator | c5bf0c1b3831 registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-04-06 07:02:24.721065 | orchestrator | 
5af6161ff98f registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) cinder_backup 2026-04-06 07:02:24.721076 | orchestrator | 94629ef9f601 registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) cinder_volume 2026-04-06 07:02:24.721087 | orchestrator | c59434ff668a registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 33 minutes (healthy) cinder_scheduler 2026-04-06 07:02:24.721098 | orchestrator | cac6a5161204 registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 33 minutes (healthy) cinder_api 2026-04-06 07:02:24.721109 | orchestrator | 79f598d2755d registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) glance_api 2026-04-06 07:02:24.721120 | orchestrator | 2a8861891b96 registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) skyline_console 2026-04-06 07:02:24.721131 | orchestrator | e182d11c55a0 registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) skyline_apiserver 2026-04-06 07:02:24.721141 | orchestrator | 43b7a0638695 registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) horizon 2026-04-06 07:02:24.721152 | orchestrator | 4eb6f5529955 registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328 "dumb-init --single-…" 59 minutes ago Up 49 minutes (healthy) nova_novncproxy 2026-04-06 07:02:24.721163 | orchestrator | ef69ddae9a25 registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 49 minutes (healthy) nova_conductor 
2026-04-06 07:02:24.721189 | orchestrator | 665f74a361d8 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) nova_metadata 2026-04-06 07:02:24.721199 | orchestrator | 6ee248300775 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 48 minutes (healthy) nova_api 2026-04-06 07:02:24.721209 | orchestrator | f16ca5d80b93 registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 49 minutes (healthy) nova_scheduler 2026-04-06 07:02:24.721218 | orchestrator | 112c1d206282 registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) neutron_server 2026-04-06 07:02:24.721228 | orchestrator | ce40f0fa419f registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) placement_api 2026-04-06 07:02:24.721243 | orchestrator | f6f1d6666e53 registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone 2026-04-06 07:02:24.721253 | orchestrator | 06a2773e7bff registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_fernet 2026-04-06 07:02:24.721263 | orchestrator | 6f5d5955e3f9 registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_ssh 2026-04-06 07:02:24.721273 | orchestrator | a2eba80abc72 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0 2026-04-06 07:02:24.721282 | orchestrator | baa7ec2e24d6 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 2 hours ago Up 2 hours 
ceph-mgr-testbed-node-0 2026-04-06 07:02:24.721292 | orchestrator | 06ed7bf51830 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 2 hours ago Up 2 hours ceph-mon-testbed-node-0 2026-04-06 07:02:24.721302 | orchestrator | 01c2ecbb5da7 registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_northd 2026-04-06 07:02:24.721311 | orchestrator | df5007bb8410 registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_sb_db_relay_1 2026-04-06 07:02:24.721321 | orchestrator | 9da4725c098d registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_sb_db 2026-04-06 07:02:24.721331 | orchestrator | a73e2cb27720 registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_nb_db 2026-04-06 07:02:24.721340 | orchestrator | 221cb9376573 registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_controller 2026-04-06 07:02:24.721351 | orchestrator | 0034f9ca8cdd registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) openvswitch_vswitchd 2026-04-06 07:02:24.721373 | orchestrator | 871908cb713d registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) openvswitch_db 2026-04-06 07:02:24.721383 | orchestrator | 930c98d6e771 registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) rabbitmq 2026-04-06 07:02:24.721393 | orchestrator | 90635b10a60c registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328 "dumb-init -- kolla_…" 2 hours ago Up 2 hours (healthy) mariadb 2026-04-06 07:02:24.721413 | orchestrator | 7291a714ae6d 
registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis_sentinel 2026-04-06 07:02:24.721423 | orchestrator | f3241a62ee43 registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis 2026-04-06 07:02:24.721433 | orchestrator | 927547d7be06 registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) memcached 2026-04-06 07:02:24.721443 | orchestrator | ef7073da181d registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards 2026-04-06 07:02:24.721452 | orchestrator | a8b742d145c3 registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch 2026-04-06 07:02:24.721462 | orchestrator | c21146fe4582 registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-04-06 07:02:24.721472 | orchestrator | 90b2ab85b3d9 registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-04-06 07:02:24.721481 | orchestrator | a768ff19d5bf registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-04-06 07:02:24.721491 | orchestrator | 3891e1ad92e1 registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-04-06 07:02:24.721500 | orchestrator | adad1a310352 registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-04-06 07:02:24.721510 | orchestrator | f37f37b92945 registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours 
fluentd 2026-04-06 07:02:24.885355 | orchestrator | 2026-04-06 07:02:24.885478 | orchestrator | ## Images @ testbed-node-0 2026-04-06 07:02:24.885506 | orchestrator | 2026-04-06 07:02:24.885527 | orchestrator | + echo 2026-04-06 07:02:24.885547 | orchestrator | + echo '## Images @ testbed-node-0' 2026-04-06 07:02:24.885625 | orchestrator | + echo 2026-04-06 07:02:24.885647 | orchestrator | + osism container testbed-node-0 images 2026-04-06 07:02:26.529535 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-06 07:02:26.529729 | orchestrator | registry.osism.tech/kolla/release/2025.1/keepalived 2.2.8.20260328 cc29bd9a85e4 8 days ago 288MB 2026-04-06 07:02:26.529748 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch-dashboards 2.19.5.20260328 f834ead10f11 8 days ago 1.54GB 2026-04-06 07:02:26.529784 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch 2.19.5.20260328 d36ae5f707fb 8 days ago 1.57GB 2026-04-06 07:02:26.529795 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 8 days ago 590MB 2026-04-06 07:02:26.529806 | orchestrator | registry.osism.tech/kolla/release/2025.1/memcached 1.6.24.20260328 09b41eff0fc1 8 days ago 277MB 2026-04-06 07:02:26.529817 | orchestrator | registry.osism.tech/kolla/release/2025.1/grafana 12.4.2.20260328 3842b7ef2d0c 8 days ago 1.04GB 2026-04-06 07:02:26.529828 | orchestrator | registry.osism.tech/kolla/release/2025.1/rabbitmq 4.1.8.20260328 c6408fdc6cf4 8 days ago 350MB 2026-04-06 07:02:26.529839 | orchestrator | registry.osism.tech/kolla/release/2025.1/proxysql 3.0.6.20260328 ccffdf9574f0 8 days ago 427MB 2026-04-06 07:02:26.529850 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 20.3.1.20260328 28c0d33bbf93 8 days ago 683MB 2026-04-06 07:02:26.529883 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 8 days ago 277MB 2026-04-06 07:02:26.529897 | orchestrator | 
registry.osism.tech/kolla/release/2025.1/haproxy 2.8.16.20260328 cf24d3343dd6 8 days ago 285MB 2026-04-06 07:02:26.529910 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-db-server 3.5.1.20260328 2df964b9b6ef 8 days ago 293MB 2026-04-06 07:02:26.529924 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd 3.5.1.20260328 d56dc4fd4981 8 days ago 293MB 2026-04-06 07:02:26.529938 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis-sentinel 7.0.15.20260328 c513d0722dfc 8 days ago 284MB 2026-04-06 07:02:26.529951 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis 7.0.15.20260328 0640729e8c26 8 days ago 284MB 2026-04-06 07:02:26.529968 | orchestrator | registry.osism.tech/kolla/release/2025.1/horizon 25.3.3.20260328 ee0ad6e2185e 8 days ago 1.2GB 2026-04-06 07:02:26.530149 | orchestrator | registry.osism.tech/kolla/release/2025.1/mariadb-server 10.11.16.20260328 886dcd3e3f53 8 days ago 463MB 2026-04-06 07:02:26.530187 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter 0.15.0.20260328 995036f125d2 8 days ago 309MB 2026-04-06 07:02:26.530209 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 8 days ago 368MB 2026-04-06 07:02:26.530233 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter 1.8.0.20260328 c9ee75870dff 8 days ago 303MB 2026-04-06 07:02:26.530254 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter 0.16.0.20260328 117acc95a5ad 8 days ago 312MB 2026-04-06 07:02:26.530272 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 8 days ago 317MB 2026-04-06 07:02:26.530290 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server 25.3.1.20260328 859fd9ce89d9 8 days ago 301MB 2026-04-06 07:02:26.530308 | orchestrator | 
registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server 25.3.1.20260328 fb0f3707730d 8 days ago 301MB 2026-04-06 07:02:26.530326 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-controller 25.3.1.20260328 3228ba87088e 8 days ago 301MB 2026-04-06 07:02:26.530346 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-northd 25.3.1.20260328 65c0953e4c39 8 days ago 301MB 2026-04-06 07:02:26.530366 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone 27.0.1.20260328 b31ea490ee2a 8 days ago 1.09GB 2026-04-06 07:02:26.530405 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-ssh 27.0.1.20260328 40f5d9a677d1 8 days ago 1.06GB 2026-04-06 07:02:26.530426 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-fernet 27.0.1.20260328 f133afc9d53b 8 days ago 1.05GB 2026-04-06 07:02:26.530492 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-central 24.0.1.20260328 d407dd61fee1 8 days ago 997MB 2026-04-06 07:02:26.530516 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-notification 24.0.1.20260328 a0d400ce4fdd 8 days ago 996MB 2026-04-06 07:02:26.530538 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-driver-agent 16.0.2.20260328 f07869d78758 8 days ago 1.07GB 2026-04-06 07:02:26.530561 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-api 16.0.2.20260328 7118289a0d17 8 days ago 1.07GB 2026-04-06 07:02:26.530583 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-worker 16.0.2.20260328 1065bc696018 8 days ago 1.05GB 2026-04-06 07:02:26.530602 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-health-manager 16.0.2.20260328 0adbcb202c49 8 days ago 1.05GB 2026-04-06 07:02:26.530622 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-housekeeping 16.0.2.20260328 1e4a4601f94f 8 days ago 1.05GB 2026-04-06 07:02:26.530641 | orchestrator | registry.osism.tech/kolla/release/2025.1/placement-api 13.0.0.20260328 
b52f42ecbb4d 8 days ago 996MB 2026-04-06 07:02:26.530661 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-listener 20.0.0.20260328 afbc43250d60 8 days ago 995MB 2026-04-06 07:02:26.530682 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-evaluator 20.0.0.20260328 26d81adaeaae 8 days ago 995MB 2026-04-06 07:02:26.530702 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-notifier 20.0.0.20260328 aa74bb4c136d 8 days ago 995MB 2026-04-06 07:02:26.530734 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-api 20.0.0.20260328 bb920611ad39 8 days ago 994MB 2026-04-06 07:02:26.530756 | orchestrator | registry.osism.tech/kolla/release/2025.1/glance-api 30.1.1.20260328 525bb863082d 8 days ago 1.12GB 2026-04-06 07:02:26.530777 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-volume 26.2.1.20260328 78cc3d4efb57 8 days ago 1.79GB 2026-04-06 07:02:26.530797 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-scheduler 26.2.1.20260328 b72d2e7568f8 8 days ago 1.43GB 2026-04-06 07:02:26.530818 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-api 26.2.1.20260328 2583a0d99734 8 days ago 1.43GB 2026-04-06 07:02:26.530839 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-backup 26.2.1.20260328 a970df3ae580 8 days ago 1.44GB 2026-04-06 07:02:26.530858 | orchestrator | registry.osism.tech/kolla/release/2025.1/neutron-server 26.0.3.20260328 b084449c71f7 8 days ago 1.24GB 2026-04-06 07:02:26.530870 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-console 6.0.1.20260328 cf9981ab1a70 8 days ago 1.07GB 2026-04-06 07:02:26.530880 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-apiserver 6.0.1.20260328 d52b28f7bdf2 8 days ago 1.02GB 2026-04-06 07:02:26.530891 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-worker 20.0.1.20260328 10c316f8a88d 8 days ago 1GB 2026-04-06 07:02:26.530902 | orchestrator | 
registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener 20.0.1.20260328 f1c21f7912dc 8 days ago 1GB 2026-04-06 07:02:26.530913 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-api 20.0.1.20260328 43f0933a84ab 8 days ago 1GB 2026-04-06 07:02:26.530934 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-conductor 20.0.2.20260328 8cf236db44c6 8 days ago 1.27GB 2026-04-06 07:02:26.530945 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-api 20.0.2.20260328 9a37ca6883b8 8 days ago 1.15GB 2026-04-06 07:02:26.530956 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-backend-bind9 20.0.1.20260328 bc68ee83deb0 8 days ago 1.01GB 2026-04-06 07:02:26.530967 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-api 20.0.1.20260328 c0c239664d22 8 days ago 1GB 2026-04-06 07:02:26.531009 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-mdns 20.0.1.20260328 c268b1854421 8 days ago 1GB 2026-04-06 07:02:26.531020 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-worker 20.0.1.20260328 3ce3202d2f8d 8 days ago 1.01GB 2026-04-06 07:02:26.531031 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-central 20.0.1.20260328 50fabfae16b4 8 days ago 1GB 2026-04-06 07:02:26.531042 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-producer 20.0.1.20260328 23baf4bae3a6 8 days ago 1GB 2026-04-06 07:02:26.531067 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-api 31.2.1.20260328 7100cf172da2 8 days ago 1.23GB 2026-04-06 07:02:26.531079 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-novncproxy 31.2.1.20260328 003749dfd921 8 days ago 1.39GB 2026-04-06 07:02:26.531090 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-scheduler 31.2.1.20260328 0b8714cecfd8 8 days ago 1.23GB 2026-04-06 07:02:26.531101 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-conductor 31.2.1.20260328 d35210169004 8 
days ago 1.23GB 2026-04-06 07:02:26.531112 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-data 20.0.2.20260328 5c1ce4fd1849 8 days ago 1.07GB 2026-04-06 07:02:26.531123 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-scheduler 20.0.2.20260328 29e4081372f9 8 days ago 1.07GB 2026-04-06 07:02:26.531134 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-api 20.0.2.20260328 949d0dfdab5b 8 days ago 1.07GB 2026-04-06 07:02:26.531144 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-share 20.0.2.20260328 d5693cb24e6d 8 days ago 1.24GB 2026-04-06 07:02:26.531155 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay 25.3.1.20260328 08ae9a102f53 8 days ago 301MB 2026-04-06 07:02:26.531166 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB 2026-04-06 07:02:26.531177 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB 2026-04-06 07:02:26.531187 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB 2026-04-06 07:02:26.531198 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB 2026-04-06 07:02:26.531209 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB 2026-04-06 07:02:26.531226 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB 2026-04-06 07:02:26.531237 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-04-06 07:02:26.531248 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB 2026-04-06 07:02:26.531265 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB 2026-04-06 
07:02:26.531276 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB 2026-04-06 07:02:26.531287 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB 2026-04-06 07:02:26.531298 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB 2026-04-06 07:02:26.531309 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB 2026-04-06 07:02:26.531319 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB 2026-04-06 07:02:26.531330 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB 2026-04-06 07:02:26.531341 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB 2026-04-06 07:02:26.531352 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB 2026-04-06 07:02:26.531362 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-04-06 07:02:26.531373 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB 2026-04-06 07:02:26.531384 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-04-06 07:02:26.531395 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB 2026-04-06 07:02:26.531412 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB 2026-04-06 07:02:26.531423 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 
279MB 2026-04-06 07:02:26.531434 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB 2026-04-06 07:02:26.531445 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB 2026-04-06 07:02:26.531456 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB 2026-04-06 07:02:26.531466 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB 2026-04-06 07:02:26.531477 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB 2026-04-06 07:02:26.531488 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB 2026-04-06 07:02:26.531498 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB 2026-04-06 07:02:26.531509 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB 2026-04-06 07:02:26.531520 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB 2026-04-06 07:02:26.531531 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB 2026-04-06 07:02:26.531547 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB 2026-04-06 07:02:26.531558 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB 2026-04-06 07:02:26.531569 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB 2026-04-06 07:02:26.531580 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB 2026-04-06 
07:02:26.531591 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB 2026-04-06 07:02:26.531602 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB 2026-04-06 07:02:26.531612 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB 2026-04-06 07:02:26.531623 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB 2026-04-06 07:02:26.531634 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB 2026-04-06 07:02:26.531645 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB 2026-04-06 07:02:26.531656 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB 2026-04-06 07:02:26.531676 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB 2026-04-06 07:02:26.531688 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB 2026-04-06 07:02:26.531699 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB 2026-04-06 07:02:26.531710 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB 2026-04-06 07:02:26.531720 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB 2026-04-06 07:02:26.531731 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB 2026-04-06 07:02:26.531742 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB 2026-04-06 
07:02:26.531758 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB 2026-04-06 07:02:26.531770 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB 2026-04-06 07:02:26.531780 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB 2026-04-06 07:02:26.531791 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB 2026-04-06 07:02:26.531802 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB 2026-04-06 07:02:26.531812 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB 2026-04-06 07:02:26.531823 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB 2026-04-06 07:02:26.531841 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB 2026-04-06 07:02:26.531851 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB 2026-04-06 07:02:26.531862 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB 2026-04-06 07:02:26.531873 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB 2026-04-06 07:02:26.531883 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB 2026-04-06 07:02:26.531894 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB 2026-04-06 07:02:26.531905 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB 2026-04-06 
07:02:26.531916 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB 2026-04-06 07:02:26.531926 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB 2026-04-06 07:02:26.531937 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB 2026-04-06 07:02:26.531948 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB 2026-04-06 07:02:26.690490 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-06 07:02:26.690956 | orchestrator | ++ semver 10.0.0 5.0.0 2026-04-06 07:02:26.749658 | orchestrator | 2026-04-06 07:02:26.749759 | orchestrator | ## Containers @ testbed-node-1 2026-04-06 07:02:26.749773 | orchestrator | 2026-04-06 07:02:26.749784 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-06 07:02:26.749794 | orchestrator | + echo 2026-04-06 07:02:26.749805 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-04-06 07:02:26.749816 | orchestrator | + echo 2026-04-06 07:02:26.749826 | orchestrator | + osism container testbed-node-1 ps 2026-04-06 07:02:28.308486 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-06 07:02:28.308590 | orchestrator | 299bbd1b7780 registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328 "dumb-init --single-…" 14 seconds ago Up 12 seconds (health: starting) magnum_conductor 2026-04-06 07:02:28.308608 | orchestrator | 20f96ed99927 registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328 "dumb-init --single-…" 38 seconds ago Up 37 seconds (healthy) magnum_api 2026-04-06 07:02:28.308620 | orchestrator | 5ca9b6a2d58b registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328 "dumb-init --single-…" 3 minutes ago Up 3 minutes grafana 2026-04-06 07:02:28.308631 | orchestrator | f5f7aefb17dc 
registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_elasticsearch_exporter 2026-04-06 07:02:28.308644 | orchestrator | 1ddee8c342bc registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_cadvisor 2026-04-06 07:02:28.308655 | orchestrator | 07e3ce320375 registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_memcached_exporter 2026-04-06 07:02:28.308667 | orchestrator | ef242254c440 registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_mysqld_exporter 2026-04-06 07:02:28.308708 | orchestrator | 6b2d6d51440c registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter 2026-04-06 07:02:28.308720 | orchestrator | e3d268e2cebf registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) manila_share 2026-04-06 07:02:28.308749 | orchestrator | fa322d28260a registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_scheduler 2026-04-06 07:02:28.308761 | orchestrator | f818956d7509 registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_data 2026-04-06 07:02:28.308772 | orchestrator | eea916ecfe9b registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api 2026-04-06 07:02:28.308783 | orchestrator | ff9a384c10f5 registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328 "dumb-init --single-…" 16 
minutes ago Up 16 minutes (healthy) octavia_worker 2026-04-06 07:02:28.308794 | orchestrator | 30f0cfbfe724 registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_housekeeping 2026-04-06 07:02:28.308805 | orchestrator | 99ca862f7a59 registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_health_manager 2026-04-06 07:02:28.308816 | orchestrator | ab457ad123ab registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes octavia_driver_agent 2026-04-06 07:02:28.308827 | orchestrator | e791aa49f42d registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) octavia_api 2026-04-06 07:02:28.308857 | orchestrator | 86f41aea2ba2 registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_notifier 2026-04-06 07:02:28.308869 | orchestrator | bbfb1453ce8e registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_listener 2026-04-06 07:02:28.308880 | orchestrator | df629cf63802 registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) aodh_evaluator 2026-04-06 07:02:28.309358 | orchestrator | dccaf2e8d4b0 registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) aodh_api 2026-04-06 07:02:28.309378 | orchestrator | b3663ccf0015 registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 23 minutes ceilometer_central 2026-04-06 07:02:28.309390 | orchestrator | 45a7de8cd8ff 
registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) ceilometer_notification 2026-04-06 07:02:28.309418 | orchestrator | 9acecbd31dff registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) designate_worker 2026-04-06 07:02:28.309429 | orchestrator | 1fd1de9e113f registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_mdns 2026-04-06 07:02:28.309440 | orchestrator | e0ec99f01cff registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_producer 2026-04-06 07:02:28.309451 | orchestrator | e74361f6a6e2 registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 2026-04-06 07:02:28.309461 | orchestrator | de36efe8d718 registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api 2026-04-06 07:02:28.309472 | orchestrator | f276d6f1802d registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9 2026-04-06 07:02:28.309483 | orchestrator | 12398f874f75 registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker 2026-04-06 07:02:28.309494 | orchestrator | 9bc423190256 registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-04-06 07:02:28.309505 | orchestrator | b1df8b1d8b99 
registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-04-06 07:02:28.309515 | orchestrator | ca19b17189f0 registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) cinder_backup 2026-04-06 07:02:28.309526 | orchestrator | c924b4f43cdd registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) cinder_volume 2026-04-06 07:02:28.309537 | orchestrator | a3d4fcc7ada5 registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 33 minutes (healthy) cinder_scheduler 2026-04-06 07:02:28.309548 | orchestrator | ab13e51328e0 registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 33 minutes (healthy) cinder_api 2026-04-06 07:02:28.309559 | orchestrator | 368434db2ad1 registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) glance_api 2026-04-06 07:02:28.309570 | orchestrator | b09269369497 registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) skyline_console 2026-04-06 07:02:28.309580 | orchestrator | 510f52af8dd3 registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) skyline_apiserver 2026-04-06 07:02:28.309598 | orchestrator | 0be453d36a7c registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) horizon 2026-04-06 07:02:28.309622 | orchestrator | cbf032c8dbf0 registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328 "dumb-init --single-…" 59 minutes ago Up 49 minutes (healthy) nova_novncproxy 2026-04-06 
07:02:28.309633 | orchestrator | 2390eee0e6d7 registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 49 minutes (healthy) nova_conductor 2026-04-06 07:02:28.309644 | orchestrator | 087b4585c8b2 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) nova_metadata 2026-04-06 07:02:28.309655 | orchestrator | 0c97ae9f336b registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 49 minutes (healthy) nova_api 2026-04-06 07:02:28.309666 | orchestrator | a945b4174fde registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 49 minutes (healthy) nova_scheduler 2026-04-06 07:02:28.309677 | orchestrator | 3004f902715d registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) neutron_server 2026-04-06 07:02:28.309687 | orchestrator | 8e212ca87dca registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) placement_api 2026-04-06 07:02:28.309698 | orchestrator | cde3e7f2ce0b registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone 2026-04-06 07:02:28.309710 | orchestrator | b2f9e2ef2857 registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_fernet 2026-04-06 07:02:28.309720 | orchestrator | 23b7a17c2d02 registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_ssh 2026-04-06 07:02:28.309731 | orchestrator | 3c733e0da1cb registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up 
About an hour ceph-crash-testbed-node-1 2026-04-06 07:02:28.309742 | orchestrator | 903c7cfde5e3 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 2 hours ago Up 2 hours ceph-mgr-testbed-node-1 2026-04-06 07:02:28.309860 | orchestrator | 6879ce368bbc registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 2 hours ago Up 2 hours ceph-mon-testbed-node-1 2026-04-06 07:02:28.309877 | orchestrator | fcd9a7cb818a registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_northd 2026-04-06 07:02:28.309889 | orchestrator | 7f4e38c24a3d registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_sb_db_relay_1 2026-04-06 07:02:28.309899 | orchestrator | 16c8fc2197bf registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_sb_db 2026-04-06 07:02:28.309910 | orchestrator | 5f4d29e96174 registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_nb_db 2026-04-06 07:02:28.309931 | orchestrator | 9e19192bebd7 registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_controller 2026-04-06 07:02:28.309942 | orchestrator | 7b46199ba11d registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) openvswitch_vswitchd 2026-04-06 07:02:28.309953 | orchestrator | f80b10d1ea92 registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) openvswitch_db 2026-04-06 07:02:28.309964 | orchestrator | 6868ef379e94 registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) rabbitmq 2026-04-06 07:02:28.310004 | orchestrator | 075b03c8ff78 
registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328 "dumb-init -- kolla_…" 2 hours ago Up 2 hours (healthy) mariadb
2026-04-06 07:02:28.310056 | orchestrator | 671ca6c0c797 registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis_sentinel
2026-04-06 07:02:28.310070 | orchestrator | 50fa8f32cbbf registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis
2026-04-06 07:02:28.310081 | orchestrator | 4d07a6a68bd5 registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) memcached
2026-04-06 07:02:28.310092 | orchestrator | adcee402fcac registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards
2026-04-06 07:02:28.310103 | orchestrator | 70de756f6abf registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch
2026-04-06 07:02:28.310114 | orchestrator | 020ec1702718 registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-04-06 07:02:28.310125 | orchestrator | de30f2af4d1a registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-04-06 07:02:28.310135 | orchestrator | 6e2f2e51c86a registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-04-06 07:02:28.310146 | orchestrator | becd49fad76e registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-04-06 07:02:28.310157 | orchestrator | 863c667699fc registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-04-06 07:02:28.310177 | orchestrator | dcf5c80077a6 registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-04-06 07:02:28.455067 | orchestrator |
2026-04-06 07:02:28.455164 | orchestrator | ## Images @ testbed-node-1
2026-04-06 07:02:28.455178 | orchestrator |
2026-04-06 07:02:28.455216 | orchestrator | + echo
2026-04-06 07:02:28.455229 | orchestrator | + echo '## Images @ testbed-node-1'
2026-04-06 07:02:28.455241 | orchestrator | + echo
2026-04-06 07:02:28.455253 | orchestrator | + osism container testbed-node-1 images
2026-04-06 07:02:30.065511 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-06 07:02:30.065718 | orchestrator | registry.osism.tech/kolla/release/2025.1/keepalived 2.2.8.20260328 cc29bd9a85e4 8 days ago 288MB
2026-04-06 07:02:30.065751 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch-dashboards 2.19.5.20260328 f834ead10f11 8 days ago 1.54GB
2026-04-06 07:02:30.065779 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch 2.19.5.20260328 d36ae5f707fb 8 days ago 1.57GB
2026-04-06 07:02:30.065791 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 8 days ago 590MB
2026-04-06 07:02:30.065802 | orchestrator | registry.osism.tech/kolla/release/2025.1/memcached 1.6.24.20260328 09b41eff0fc1 8 days ago 277MB
2026-04-06 07:02:30.065813 | orchestrator | registry.osism.tech/kolla/release/2025.1/grafana 12.4.2.20260328 3842b7ef2d0c 8 days ago 1.04GB
2026-04-06 07:02:30.065828 | orchestrator | registry.osism.tech/kolla/release/2025.1/rabbitmq 4.1.8.20260328 c6408fdc6cf4 8 days ago 350MB
2026-04-06 07:02:30.065839 | orchestrator | registry.osism.tech/kolla/release/2025.1/proxysql 3.0.6.20260328 ccffdf9574f0 8 days ago 427MB
2026-04-06 07:02:30.065850 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 20.3.1.20260328 28c0d33bbf93 8 days ago 683MB
2026-04-06 07:02:30.065861 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 8 days ago 277MB
2026-04-06 07:02:30.065872 | orchestrator | registry.osism.tech/kolla/release/2025.1/haproxy 2.8.16.20260328 cf24d3343dd6 8 days ago 285MB
2026-04-06 07:02:30.065883 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-db-server 3.5.1.20260328 2df964b9b6ef 8 days ago 293MB
2026-04-06 07:02:30.065893 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd 3.5.1.20260328 d56dc4fd4981 8 days ago 293MB
2026-04-06 07:02:30.065904 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis-sentinel 7.0.15.20260328 c513d0722dfc 8 days ago 284MB
2026-04-06 07:02:30.065915 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis 7.0.15.20260328 0640729e8c26 8 days ago 284MB
2026-04-06 07:02:30.065926 | orchestrator | registry.osism.tech/kolla/release/2025.1/horizon 25.3.3.20260328 ee0ad6e2185e 8 days ago 1.2GB
2026-04-06 07:02:30.065937 | orchestrator | registry.osism.tech/kolla/release/2025.1/mariadb-server 10.11.16.20260328 886dcd3e3f53 8 days ago 463MB
2026-04-06 07:02:30.065948 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter 0.15.0.20260328 995036f125d2 8 days ago 309MB
2026-04-06 07:02:30.065958 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 8 days ago 368MB
2026-04-06 07:02:30.065969 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter 1.8.0.20260328 c9ee75870dff 8 days ago 303MB
2026-04-06 07:02:30.066012 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter 0.16.0.20260328 117acc95a5ad 8 days ago 312MB
2026-04-06 07:02:30.066121 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 8 days ago 317MB
2026-04-06 07:02:30.066141 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server 25.3.1.20260328 859fd9ce89d9 8 days ago 301MB
2026-04-06 07:02:30.066213 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server 25.3.1.20260328 fb0f3707730d 8 days ago 301MB
2026-04-06 07:02:30.066229 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-controller 25.3.1.20260328 3228ba87088e 8 days ago 301MB
2026-04-06 07:02:30.066240 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-northd 25.3.1.20260328 65c0953e4c39 8 days ago 301MB
2026-04-06 07:02:30.066251 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone 27.0.1.20260328 b31ea490ee2a 8 days ago 1.09GB
2026-04-06 07:02:30.066262 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-ssh 27.0.1.20260328 40f5d9a677d1 8 days ago 1.06GB
2026-04-06 07:02:30.066273 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-fernet 27.0.1.20260328 f133afc9d53b 8 days ago 1.05GB
2026-04-06 07:02:30.066305 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-central 24.0.1.20260328 d407dd61fee1 8 days ago 997MB
2026-04-06 07:02:30.066317 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-notification 24.0.1.20260328 a0d400ce4fdd 8 days ago 996MB
2026-04-06 07:02:30.066328 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-driver-agent 16.0.2.20260328 f07869d78758 8 days ago 1.07GB
2026-04-06 07:02:30.066339 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-api 16.0.2.20260328 7118289a0d17 8 days ago 1.07GB
2026-04-06 07:02:30.066349 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-worker 16.0.2.20260328 1065bc696018 8 days ago 1.05GB
2026-04-06 07:02:30.066360 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-health-manager 16.0.2.20260328 0adbcb202c49 8 days ago 1.05GB
2026-04-06 07:02:30.066371 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-housekeeping 16.0.2.20260328 1e4a4601f94f 8 days ago 1.05GB
2026-04-06 07:02:30.066381 | orchestrator | registry.osism.tech/kolla/release/2025.1/placement-api 13.0.0.20260328 b52f42ecbb4d 8 days ago 996MB
2026-04-06 07:02:30.066392 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-listener 20.0.0.20260328 afbc43250d60 8 days ago 995MB
2026-04-06 07:02:30.066403 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-evaluator 20.0.0.20260328 26d81adaeaae 8 days ago 995MB
2026-04-06 07:02:30.066414 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-notifier 20.0.0.20260328 aa74bb4c136d 8 days ago 995MB
2026-04-06 07:02:30.066424 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-api 20.0.0.20260328 bb920611ad39 8 days ago 994MB
2026-04-06 07:02:30.066435 | orchestrator | registry.osism.tech/kolla/release/2025.1/glance-api 30.1.1.20260328 525bb863082d 8 days ago 1.12GB
2026-04-06 07:02:30.066446 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-volume 26.2.1.20260328 78cc3d4efb57 8 days ago 1.79GB
2026-04-06 07:02:30.066456 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-scheduler 26.2.1.20260328 b72d2e7568f8 8 days ago 1.43GB
2026-04-06 07:02:30.066467 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-api 26.2.1.20260328 2583a0d99734 8 days ago 1.43GB
2026-04-06 07:02:30.066478 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-backup 26.2.1.20260328 a970df3ae580 8 days ago 1.44GB
2026-04-06 07:02:30.066489 | orchestrator | registry.osism.tech/kolla/release/2025.1/neutron-server 26.0.3.20260328 b084449c71f7 8 days ago 1.24GB
2026-04-06 07:02:30.066500 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-console 6.0.1.20260328 cf9981ab1a70 8 days ago 1.07GB
2026-04-06 07:02:30.066518 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-apiserver 6.0.1.20260328 d52b28f7bdf2 8 days ago 1.02GB
2026-04-06 07:02:30.066529 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-worker 20.0.1.20260328 10c316f8a88d 8 days ago 1GB
2026-04-06 07:02:30.066540 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener 20.0.1.20260328 f1c21f7912dc 8 days ago 1GB
2026-04-06 07:02:30.066551 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-api 20.0.1.20260328 43f0933a84ab 8 days ago 1GB
2026-04-06 07:02:30.066562 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-conductor 20.0.2.20260328 8cf236db44c6 8 days ago 1.27GB
2026-04-06 07:02:30.066573 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-api 20.0.2.20260328 9a37ca6883b8 8 days ago 1.15GB
2026-04-06 07:02:30.066584 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-backend-bind9 20.0.1.20260328 bc68ee83deb0 8 days ago 1.01GB
2026-04-06 07:02:30.066594 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-api 20.0.1.20260328 c0c239664d22 8 days ago 1GB
2026-04-06 07:02:30.066605 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-mdns 20.0.1.20260328 c268b1854421 8 days ago 1GB
2026-04-06 07:02:30.066616 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-worker 20.0.1.20260328 3ce3202d2f8d 8 days ago 1.01GB
2026-04-06 07:02:30.066627 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-central 20.0.1.20260328 50fabfae16b4 8 days ago 1GB
2026-04-06 07:02:30.066637 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-producer 20.0.1.20260328 23baf4bae3a6 8 days ago 1GB
2026-04-06 07:02:30.066655 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-api 31.2.1.20260328 7100cf172da2 8 days ago 1.23GB
2026-04-06 07:02:30.066667 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-novncproxy 31.2.1.20260328 003749dfd921 8 days ago 1.39GB
2026-04-06 07:02:30.066677 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-scheduler 31.2.1.20260328 0b8714cecfd8 8 days ago 1.23GB
2026-04-06 07:02:30.066688 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-conductor 31.2.1.20260328 d35210169004 8 days ago 1.23GB
2026-04-06 07:02:30.066706 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-data 20.0.2.20260328 5c1ce4fd1849 8 days ago 1.07GB
2026-04-06 07:02:30.066717 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-scheduler 20.0.2.20260328 29e4081372f9 8 days ago 1.07GB
2026-04-06 07:02:30.066728 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-api 20.0.2.20260328 949d0dfdab5b 8 days ago 1.07GB
2026-04-06 07:02:30.066743 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-share 20.0.2.20260328 d5693cb24e6d 8 days ago 1.24GB
2026-04-06 07:02:30.066754 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay 25.3.1.20260328 08ae9a102f53 8 days ago 301MB
2026-04-06 07:02:30.066765 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-04-06 07:02:30.066776 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-04-06 07:02:30.066786 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-04-06 07:02:30.066797 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-04-06 07:02:30.066814 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-04-06 07:02:30.066825 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-06 07:02:30.066836 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-06 07:02:30.066846 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-04-06 07:02:30.066857 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-04-06 07:02:30.066868 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-04-06 07:02:30.066879 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-06 07:02:30.066889 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-04-06 07:02:30.066919 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-04-06 07:02:30.066932 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-04-06 07:02:30.066943 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-04-06 07:02:30.066954 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-04-06 07:02:30.066964 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-04-06 07:02:30.066999 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-06 07:02:30.067015 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-04-06 07:02:30.067026 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-06 07:02:30.067037 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-04-06 07:02:30.067054 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-04-06 07:02:30.067065 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-04-06 07:02:30.067076 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB
2026-04-06 07:02:30.067087 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB
2026-04-06 07:02:30.067098 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB
2026-04-06 07:02:30.067109 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB
2026-04-06 07:02:30.067119 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB
2026-04-06 07:02:30.067135 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-04-06 07:02:30.067146 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-04-06 07:02:30.067164 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-04-06 07:02:30.067175 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-04-06 07:02:30.067186 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-04-06 07:02:30.067197 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-04-06 07:02:30.067212 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-04-06 07:02:30.067231 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB
2026-04-06 07:02:30.067250 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB
2026-04-06 07:02:30.067268 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB
2026-04-06 07:02:30.067286 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB
2026-04-06 07:02:30.067304 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB
2026-04-06 07:02:30.067322 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB
2026-04-06 07:02:30.067341 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB
2026-04-06 07:02:30.067358 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB
2026-04-06 07:02:30.067375 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB
2026-04-06 07:02:30.067393 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB
2026-04-06 07:02:30.067408 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-04-06 07:02:30.067428 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-04-06 07:02:30.067446 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-04-06 07:02:30.067463 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-04-06 07:02:30.067482 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-04-06 07:02:30.067499 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-04-06 07:02:30.067529 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-04-06 07:02:30.067549 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-04-06 07:02:30.067567 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-04-06 07:02:30.067586 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-04-06 07:02:30.067668 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-04-06 07:02:30.067679 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-04-06 07:02:30.067690 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB
2026-04-06 07:02:30.067701 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB
2026-04-06 07:02:30.067712 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB
2026-04-06 07:02:30.067722 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB
2026-04-06 07:02:30.067733 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB
2026-04-06 07:02:30.067744 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB
2026-04-06 07:02:30.067755 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB
2026-04-06 07:02:30.067766 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB
2026-04-06 07:02:30.067784 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB
2026-04-06 07:02:30.067797 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB
2026-04-06 07:02:30.067816 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB
2026-04-06 07:02:30.067834 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB
2026-04-06 07:02:30.219659 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-06 07:02:30.219756 | orchestrator | ++ semver 10.0.0 5.0.0
2026-04-06 07:02:30.276299 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-06 07:02:30.276397 | orchestrator | + echo
2026-04-06 07:02:30.276559 | orchestrator |
2026-04-06 07:02:30.276580 | orchestrator | ## Containers @ testbed-node-2
2026-04-06 07:02:30.276592 | orchestrator |
2026-04-06 07:02:30.276603 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-04-06 07:02:30.276615 | orchestrator | + echo
2026-04-06 07:02:30.276626 | orchestrator | + osism container testbed-node-2 ps
2026-04-06 07:02:31.833730 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-06 07:02:31.833868 | orchestrator | 6b7087e82bc9 registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328 "dumb-init --single-…" 18 seconds ago Up 17 seconds (health: starting) magnum_conductor
2026-04-06 07:02:31.833885 | orchestrator | aea84f02e5b6 registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328 "dumb-init --single-…" 41 seconds ago Up 40 seconds (healthy) magnum_api
2026-04-06 07:02:31.833897 | orchestrator | 689a8cbd94de registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328 "dumb-init --single-…" 3 minutes ago Up 3 minutes grafana
2026-04-06 07:02:31.833908 | orchestrator | 52165f7571a2 registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_elasticsearch_exporter
2026-04-06 07:02:31.833946 | orchestrator | 8fea1f484d10 registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_cadvisor
2026-04-06 07:02:31.833959 | orchestrator | 3b390bc277af registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_memcached_exporter
2026-04-06 07:02:31.833970 | orchestrator | 9508ff87506d registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_mysqld_exporter
2026-04-06 07:02:31.834157 | orchestrator | f63bef842ef4 registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter
2026-04-06 07:02:31.834171 | orchestrator | 3d191b8ea2b8 registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) manila_share
2026-04-06 07:02:31.834182 | orchestrator | 81ee4fc6e720 registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_scheduler
2026-04-06 07:02:31.834209 | orchestrator | 6cdf8dc88678 registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data
2026-04-06 07:02:31.834221 | orchestrator | 55bb5d69e535 registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api
2026-04-06 07:02:31.834232 | orchestrator | 4ac3b68621e8 registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_worker
2026-04-06 07:02:31.834243 | orchestrator | dfb8af2bbac9 registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_housekeeping
2026-04-06 07:02:31.834255 | orchestrator | 72b59f2c425b registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_health_manager
2026-04-06 07:02:31.834266 | orchestrator | bac2210d78bf registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328 "dumb-init --single-…" 17 minutes ago Up 17 minutes octavia_driver_agent
2026-04-06 07:02:31.834276 | orchestrator | 6ed1090e4fcb registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) octavia_api
2026-04-06 07:02:31.834307 | orchestrator | 77b550f84e41 registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_notifier
2026-04-06 07:02:31.834323 | orchestrator | c5a7e687e241 registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_listener
2026-04-06 07:02:31.834336 | orchestrator | 6f45edbddecd registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) aodh_evaluator
2026-04-06 07:02:31.834350 | orchestrator | c10cb211e831 registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) aodh_api
2026-04-06 07:02:31.834374 | orchestrator | e5df96b58013 registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 23 minutes ceilometer_central
2026-04-06 07:02:31.834387 | orchestrator | 1b83b1de37f6 registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) ceilometer_notification
2026-04-06 07:02:31.834400 | orchestrator | 5b23d8846844 registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) designate_worker
2026-04-06 07:02:31.834414 | orchestrator | f6295620492f registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_mdns
2026-04-06 07:02:31.834426 | orchestrator | cfaa4b0164be registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_producer
2026-04-06 07:02:31.834439 | orchestrator | c6df6e526787 registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central
2026-04-06 07:02:31.834452 | orchestrator | 30ced223cbe1 registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api
2026-04-06 07:02:31.834465 | orchestrator | 152a8e2ab1c1 registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9
2026-04-06 07:02:31.834478 | orchestrator | beccd37a23b0 registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) barbican_worker
2026-04-06 07:02:31.834491 | orchestrator | e62867e86728 registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener
2026-04-06 07:02:31.834504 | orchestrator | 263711202ec5 registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api
2026-04-06 07:02:31.834517 | orchestrator | b845e9496879 registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) cinder_backup
2026-04-06 07:02:31.834531 | orchestrator | c464e5f220f3 registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328 "dumb-init --single-…" 34 minutes ago Up 33 minutes (healthy) cinder_volume
2026-04-06 07:02:31.834545 | orchestrator | 928cfdcbcfe4 registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 33 minutes (healthy) cinder_scheduler
2026-04-06 07:02:31.834559 | orchestrator | bb86db002105 registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328 "dumb-init --single-…" 35 minutes ago Up 33 minutes (healthy) cinder_api
2026-04-06 07:02:31.834579 | orchestrator | 2b0918654b1f registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) glance_api
2026-04-06 07:02:31.834593 | orchestrator | 5af112c52bcf registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) skyline_console
2026-04-06 07:02:31.834613 | orchestrator | ffa8ddb1b61f registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) skyline_apiserver
2026-04-06 07:02:31.834626 | orchestrator | a6b779cf81a5 registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) horizon
2026-04-06 07:02:31.834639 | orchestrator | 5731e767474b registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328 "dumb-init --single-…" 59 minutes ago Up 49 minutes (healthy) nova_novncproxy
2026-04-06 07:02:31.834653 | orchestrator | 3506fed9e85e registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 49 minutes (healthy) nova_conductor
2026-04-06 07:02:31.834666 | orchestrator | c319dddef7f4 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) nova_metadata
2026-04-06 07:02:31.834680 | orchestrator | 21f1007ed4b3 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 49 minutes (healthy) nova_api
2026-04-06 07:02:31.834699 | orchestrator | d590d4803dc5 registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328 "dumb-init --single-…" About an hour ago Up 49 minutes (healthy) nova_scheduler
2026-04-06 07:02:31.834710 | orchestrator | 97baae76238f registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) neutron_server
2026-04-06 07:02:31.834721 | orchestrator | b7b4080b90e3 registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) placement_api
2026-04-06 07:02:31.834732 | orchestrator | cacd33ecfb2e registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone
2026-04-06 07:02:31.834748 | orchestrator | 0e978aa8b643 registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_fernet
2026-04-06 07:02:31.834759 | orchestrator | d51c4cdec446 registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_ssh
2026-04-06 07:02:31.834770 | orchestrator | 18c8675ee799 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2
2026-04-06 07:02:31.834781 | orchestrator | 51ceae878f08 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 2 hours ago Up 2 hours ceph-mgr-testbed-node-2
2026-04-06 07:02:31.834791 | orchestrator | a00606ebddc6 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 2 hours ago Up 2 hours ceph-mon-testbed-node-2
2026-04-06 07:02:31.834802 | orchestrator | 6e7e1a603fae registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_northd
2026-04-06 07:02:31.834819 | orchestrator | 10a991035e0e registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_sb_db_relay_1
2026-04-06 07:02:31.834830 | orchestrator | 8319f9d412dd registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_sb_db
2026-04-06 07:02:31.834848 | orchestrator | 1eb7531deb27 registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_nb_db
2026-04-06 07:02:31.834860 | orchestrator | 34815ad0858c registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_controller
2026-04-06 07:02:31.834870 | orchestrator | 8c7d2ff0f749 registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) openvswitch_vswitchd
2026-04-06 07:02:31.834881 | orchestrator | c4eacdb04deb registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) openvswitch_db
2026-04-06 07:02:31.834892 | orchestrator | 08c9227857b2 registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) rabbitmq
2026-04-06 07:02:31.834903 | orchestrator | 79649c191a16 registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328 "dumb-init -- kolla_…" 2 hours ago Up 2 hours (healthy) mariadb
2026-04-06 07:02:31.834914 | orchestrator | a3732f4d9c54 registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis_sentinel
2026-04-06 07:02:31.834924 | orchestrator | 27a1e5642d07 registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) redis
2026-04-06 07:02:31.834935 | orchestrator | c8a6a111da29 registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) memcached
2026-04-06 07:02:31.834946 | orchestrator | c04e4c5fb75a registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards
2026-04-06 07:02:31.834957 | orchestrator | ec68acfd1daf registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch
2026-04-06 07:02:31.834968 | orchestrator | c19ceaa65267 registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-04-06 07:02:31.835015 | orchestrator | 9ee45bcd52b9 registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-04-06 07:02:31.835027 | orchestrator | b4c4d0a721dd registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-04-06 07:02:31.835039 | orchestrator | 978cf2200b00 registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-04-06 07:02:31.835056 | orchestrator | 8e0ec8159db7 registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-04-06 07:02:31.835067 | orchestrator | 26d6879ffbde registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-04-06 07:02:31.980675 | orchestrator |
2026-04-06 07:02:31.980756 | orchestrator | ## Images @ testbed-node-2
2026-04-06 07:02:31.980767 | orchestrator |
2026-04-06 07:02:31.980777 | orchestrator | + echo
2026-04-06 07:02:31.980786 | orchestrator | + echo '## Images @ testbed-node-2'
2026-04-06 07:02:31.980796 | orchestrator | + echo
2026-04-06 07:02:31.980805 | orchestrator | + osism container testbed-node-2 images
2026-04-06 07:02:33.529486 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-06 07:02:33.529587 | orchestrator | registry.osism.tech/kolla/release/2025.1/keepalived 2.2.8.20260328 cc29bd9a85e4 8 days ago 288MB
2026-04-06 07:02:33.529602 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch-dashboards 2.19.5.20260328 f834ead10f11 8 days ago 1.54GB
2026-04-06 07:02:33.529613 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch 2.19.5.20260328 d36ae5f707fb 8 days ago 1.57GB
2026-04-06 07:02:33.529624 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 8 days ago 590MB
2026-04-06 07:02:33.529635 | orchestrator | registry.osism.tech/kolla/release/2025.1/memcached 1.6.24.20260328 09b41eff0fc1 8 days ago 277MB
2026-04-06 07:02:33.529646 | orchestrator | registry.osism.tech/kolla/release/2025.1/grafana 12.4.2.20260328 3842b7ef2d0c 8 days ago 1.04GB
2026-04-06 07:02:33.529656 | orchestrator | registry.osism.tech/kolla/release/2025.1/proxysql 3.0.6.20260328 ccffdf9574f0 8 days ago 427MB
2026-04-06 07:02:33.529667 | orchestrator | registry.osism.tech/kolla/release/2025.1/rabbitmq 4.1.8.20260328 c6408fdc6cf4 8 days ago 350MB
2026-04-06 07:02:33.529678 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 20.3.1.20260328
28c0d33bbf93 8 days ago 683MB 2026-04-06 07:02:33.529688 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 8 days ago 277MB 2026-04-06 07:02:33.529699 | orchestrator | registry.osism.tech/kolla/release/2025.1/haproxy 2.8.16.20260328 cf24d3343dd6 8 days ago 285MB 2026-04-06 07:02:33.529710 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-db-server 3.5.1.20260328 2df964b9b6ef 8 days ago 293MB 2026-04-06 07:02:33.529721 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd 3.5.1.20260328 d56dc4fd4981 8 days ago 293MB 2026-04-06 07:02:33.529731 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis-sentinel 7.0.15.20260328 c513d0722dfc 8 days ago 284MB 2026-04-06 07:02:33.529742 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis 7.0.15.20260328 0640729e8c26 8 days ago 284MB 2026-04-06 07:02:33.529753 | orchestrator | registry.osism.tech/kolla/release/2025.1/horizon 25.3.3.20260328 ee0ad6e2185e 8 days ago 1.2GB 2026-04-06 07:02:33.529763 | orchestrator | registry.osism.tech/kolla/release/2025.1/mariadb-server 10.11.16.20260328 886dcd3e3f53 8 days ago 463MB 2026-04-06 07:02:33.529774 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter 0.15.0.20260328 995036f125d2 8 days ago 309MB 2026-04-06 07:02:33.529785 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 8 days ago 368MB 2026-04-06 07:02:33.529795 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter 1.8.0.20260328 c9ee75870dff 8 days ago 303MB 2026-04-06 07:02:33.529830 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter 0.16.0.20260328 117acc95a5ad 8 days ago 312MB 2026-04-06 07:02:33.529913 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 8 days ago 317MB 2026-04-06 07:02:33.529928 | 
orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server 25.3.1.20260328 859fd9ce89d9 8 days ago 301MB
2026-04-06 07:02:33.529939 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server 25.3.1.20260328 fb0f3707730d 8 days ago 301MB
2026-04-06 07:02:33.529950 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-northd 25.3.1.20260328 65c0953e4c39 8 days ago 301MB
2026-04-06 07:02:33.529961 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-controller 25.3.1.20260328 3228ba87088e 8 days ago 301MB
2026-04-06 07:02:33.529972 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone 27.0.1.20260328 b31ea490ee2a 8 days ago 1.09GB
2026-04-06 07:02:33.530009 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-ssh 27.0.1.20260328 40f5d9a677d1 8 days ago 1.06GB
2026-04-06 07:02:33.530150 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-fernet 27.0.1.20260328 f133afc9d53b 8 days ago 1.05GB
2026-04-06 07:02:33.530183 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-central 24.0.1.20260328 d407dd61fee1 8 days ago 997MB
2026-04-06 07:02:33.530195 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-notification 24.0.1.20260328 a0d400ce4fdd 8 days ago 996MB
2026-04-06 07:02:33.530206 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-driver-agent 16.0.2.20260328 f07869d78758 8 days ago 1.07GB
2026-04-06 07:02:33.530217 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-api 16.0.2.20260328 7118289a0d17 8 days ago 1.07GB
2026-04-06 07:02:33.530228 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-worker 16.0.2.20260328 1065bc696018 8 days ago 1.05GB
2026-04-06 07:02:33.530238 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-health-manager 16.0.2.20260328 0adbcb202c49 8 days ago 1.05GB
2026-04-06 07:02:33.530249 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-housekeeping 16.0.2.20260328 1e4a4601f94f 8 days ago 1.05GB
2026-04-06 07:02:33.530260 | orchestrator | registry.osism.tech/kolla/release/2025.1/placement-api 13.0.0.20260328 b52f42ecbb4d 8 days ago 996MB
2026-04-06 07:02:33.530271 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-listener 20.0.0.20260328 afbc43250d60 8 days ago 995MB
2026-04-06 07:02:33.530281 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-evaluator 20.0.0.20260328 26d81adaeaae 8 days ago 995MB
2026-04-06 07:02:33.530292 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-notifier 20.0.0.20260328 aa74bb4c136d 8 days ago 995MB
2026-04-06 07:02:33.530303 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-api 20.0.0.20260328 bb920611ad39 8 days ago 994MB
2026-04-06 07:02:33.530331 | orchestrator | registry.osism.tech/kolla/release/2025.1/glance-api 30.1.1.20260328 525bb863082d 8 days ago 1.12GB
2026-04-06 07:02:33.530342 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-volume 26.2.1.20260328 78cc3d4efb57 8 days ago 1.79GB
2026-04-06 07:02:33.530353 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-scheduler 26.2.1.20260328 b72d2e7568f8 8 days ago 1.43GB
2026-04-06 07:02:33.530364 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-api 26.2.1.20260328 2583a0d99734 8 days ago 1.43GB
2026-04-06 07:02:33.530385 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-backup 26.2.1.20260328 a970df3ae580 8 days ago 1.44GB
2026-04-06 07:02:33.530395 | orchestrator | registry.osism.tech/kolla/release/2025.1/neutron-server 26.0.3.20260328 b084449c71f7 8 days ago 1.24GB
2026-04-06 07:02:33.530406 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-console 6.0.1.20260328 cf9981ab1a70 8 days ago 1.07GB
2026-04-06 07:02:33.530417 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-apiserver 6.0.1.20260328 d52b28f7bdf2 8 days ago 1.02GB
2026-04-06 07:02:33.530427 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-worker 20.0.1.20260328 10c316f8a88d 8 days ago 1GB
2026-04-06 07:02:33.530438 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener 20.0.1.20260328 f1c21f7912dc 8 days ago 1GB
2026-04-06 07:02:33.530453 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-api 20.0.1.20260328 43f0933a84ab 8 days ago 1GB
2026-04-06 07:02:33.530464 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-conductor 20.0.2.20260328 8cf236db44c6 8 days ago 1.27GB
2026-04-06 07:02:33.530475 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-api 20.0.2.20260328 9a37ca6883b8 8 days ago 1.15GB
2026-04-06 07:02:33.530486 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-backend-bind9 20.0.1.20260328 bc68ee83deb0 8 days ago 1.01GB
2026-04-06 07:02:33.530496 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-api 20.0.1.20260328 c0c239664d22 8 days ago 1GB
2026-04-06 07:02:33.530507 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-mdns 20.0.1.20260328 c268b1854421 8 days ago 1GB
2026-04-06 07:02:33.530517 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-worker 20.0.1.20260328 3ce3202d2f8d 8 days ago 1.01GB
2026-04-06 07:02:33.530528 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-central 20.0.1.20260328 50fabfae16b4 8 days ago 1GB
2026-04-06 07:02:33.530539 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-producer 20.0.1.20260328 23baf4bae3a6 8 days ago 1GB
2026-04-06 07:02:33.530556 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-api 31.2.1.20260328 7100cf172da2 8 days ago 1.23GB
2026-04-06 07:02:33.530568 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-novncproxy 31.2.1.20260328 003749dfd921 8 days ago 1.39GB
2026-04-06 07:02:33.530578 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-scheduler 31.2.1.20260328 0b8714cecfd8 8 days ago 1.23GB
2026-04-06 07:02:33.530589 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-conductor 31.2.1.20260328 d35210169004 8 days ago 1.23GB
2026-04-06 07:02:33.530600 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-data 20.0.2.20260328 5c1ce4fd1849 8 days ago 1.07GB
2026-04-06 07:02:33.530610 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-scheduler 20.0.2.20260328 29e4081372f9 8 days ago 1.07GB
2026-04-06 07:02:33.530621 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-api 20.0.2.20260328 949d0dfdab5b 8 days ago 1.07GB
2026-04-06 07:02:33.530631 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-share 20.0.2.20260328 d5693cb24e6d 8 days ago 1.24GB
2026-04-06 07:02:33.530642 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay 25.3.1.20260328 08ae9a102f53 8 days ago 301MB
2026-04-06 07:02:33.530653 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-04-06 07:02:33.530677 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-04-06 07:02:33.530688 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-04-06 07:02:33.530699 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-04-06 07:02:33.530710 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-04-06 07:02:33.530721 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-06 07:02:33.530731 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-06 07:02:33.530742 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-04-06
07:02:33.530753 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-04-06 07:02:33.530763 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-04-06 07:02:33.530774 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-06 07:02:33.530784 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-04-06 07:02:33.530795 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-04-06 07:02:33.530810 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-04-06 07:02:33.530821 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-04-06 07:02:33.530832 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-04-06 07:02:33.530842 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-04-06 07:02:33.530853 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-06 07:02:33.530863 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-04-06 07:02:33.530886 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-06 07:02:33.530897 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-04-06 07:02:33.530915 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-04-06 07:02:33.530926 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-04-06 07:02:33.530936 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB
2026-04-06 07:02:33.530947 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB
2026-04-06 07:02:33.530958 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB
2026-04-06 07:02:33.530998 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB
2026-04-06 07:02:33.531010 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB
2026-04-06 07:02:33.531021 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-04-06 07:02:33.531032 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-04-06 07:02:33.531042 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-04-06 07:02:33.531053 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-04-06 07:02:33.531064 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-04-06 07:02:33.531075 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-04-06 07:02:33.531085 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-04-06 07:02:33.531096 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB
2026-04-06 07:02:33.531107 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB
2026-04-06 07:02:33.531118 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB
2026-04-06 07:02:33.531129 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB
2026-04-06 07:02:33.531140 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB
2026-04-06 07:02:33.531150 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB
2026-04-06 07:02:33.531161 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB
2026-04-06 07:02:33.531172 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB
2026-04-06 07:02:33.531183 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB
2026-04-06 07:02:33.531194 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB
2026-04-06 07:02:33.531204 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-04-06 07:02:33.531215 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-04-06 07:02:33.531226 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-04-06 07:02:33.531236 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-04-06 07:02:33.531247 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-04-06 07:02:33.531258 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-04-06 07:02:33.531281 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-04-06 07:02:33.531293 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-04-06 07:02:33.531303 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-04-06 07:02:33.531314 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-04-06 07:02:33.531325 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-04-06 07:02:33.531336 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-04-06 07:02:33.531347 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB
2026-04-06 07:02:33.531357 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB
2026-04-06 07:02:33.531368 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB
2026-04-06 07:02:33.531379 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB
2026-04-06 07:02:33.531390 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB
2026-04-06 07:02:33.531400 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB
2026-04-06 07:02:33.531411 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB
2026-04-06
07:02:33.531422 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB
2026-04-06 07:02:33.531432 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB
2026-04-06 07:02:33.531443 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB
2026-04-06 07:02:33.531454 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB
2026-04-06 07:02:33.531465 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB
2026-04-06 07:02:33.685595 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-04-06 07:02:33.690697 | orchestrator | + set -e
2026-04-06 07:02:33.690747 | orchestrator | + source /opt/manager-vars.sh
2026-04-06 07:02:33.690758 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-06 07:02:33.690768 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-06 07:02:33.690776 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-06 07:02:33.691196 | orchestrator | ++ CEPH_VERSION=reef
2026-04-06 07:02:33.691213 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-06 07:02:33.691223 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-06 07:02:33.691231 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-06 07:02:33.691240 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-06 07:02:33.691249 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-06 07:02:33.691258 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-06 07:02:33.691266 | orchestrator | ++ export ARA=false
2026-04-06 07:02:33.691356 | orchestrator | ++ ARA=false
2026-04-06 07:02:33.691368 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-06 07:02:33.691376 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-06 07:02:33.691385 | orchestrator | ++ export TEMPEST=false
2026-04-06 07:02:33.691394 | orchestrator | ++ TEMPEST=false
2026-04-06 07:02:33.691403 | orchestrator | ++ export IS_ZUUL=true
2026-04-06 07:02:33.691430 | orchestrator | ++ IS_ZUUL=true
2026-04-06 07:02:33.691439 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-04-06 07:02:33.691448 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-04-06 07:02:33.691457 | orchestrator | ++ export EXTERNAL_API=false
2026-04-06 07:02:33.691469 | orchestrator | ++ EXTERNAL_API=false
2026-04-06 07:02:33.691478 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-06 07:02:33.691487 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-06 07:02:33.691495 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-06 07:02:33.691504 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-06 07:02:33.691513 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-06 07:02:33.691521 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-06 07:02:33.691529 | orchestrator | ++ export RABBITMQ3TO4=true
2026-04-06 07:02:33.691538 | orchestrator | ++ RABBITMQ3TO4=true
2026-04-06 07:02:33.691547 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-06 07:02:33.691556 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-04-06 07:02:33.699068 | orchestrator | + set -e
2026-04-06 07:02:33.699121 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-06 07:02:33.699136 | orchestrator | ++ export INTERACTIVE=false
2026-04-06 07:02:33.699151 | orchestrator | ++ INTERACTIVE=false
2026-04-06 07:02:33.699165 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-06 07:02:33.699178 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-06 07:02:33.699193 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-06 07:02:33.700581 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-06 07:02:33.704596 | orchestrator | 
2026-04-06 07:02:33.704617 | orchestrator | # Ceph status
2026-04-06 07:02:33.704629 | orchestrator | 
2026-04-06 07:02:33.704641 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-06 07:02:33.704652 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-06 07:02:33.704665 | orchestrator | + echo
2026-04-06 07:02:33.704676 | orchestrator | + echo '# Ceph status'
2026-04-06 07:02:33.704688 | orchestrator | + echo
2026-04-06 07:02:33.704700 | orchestrator | + ceph -s
2026-04-06 07:02:34.387498 | orchestrator | cluster:
2026-04-06 07:02:34.387651 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-04-06 07:02:34.387662 | orchestrator | health: HEALTH_OK
2026-04-06 07:02:34.387670 | orchestrator | 
2026-04-06 07:02:34.387677 | orchestrator | services:
2026-04-06 07:02:34.387683 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 107m)
2026-04-06 07:02:34.387701 | orchestrator | mgr: testbed-node-0(active, since 102m), standbys: testbed-node-1, testbed-node-2
2026-04-06 07:02:34.387708 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-04-06 07:02:34.387714 | orchestrator | osd: 6 osds: 6 up (since 94m), 6 in (since 4h)
2026-04-06 07:02:34.387721 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-04-06 07:02:34.387727 | orchestrator | 
2026-04-06 07:02:34.387734 | orchestrator | data:
2026-04-06 07:02:34.387740 | orchestrator | volumes: 1/1 healthy
2026-04-06 07:02:34.387746 | orchestrator | pools: 14 pools, 401 pgs
2026-04-06 07:02:34.387753 | orchestrator | objects: 821 objects, 2.8 GiB
2026-04-06 07:02:34.387759 | orchestrator | usage: 8.0 GiB used, 112 GiB / 120 GiB avail
2026-04-06 07:02:34.387765 | orchestrator | pgs: 401 active+clean
2026-04-06 07:02:34.387771 | orchestrator | 
2026-04-06 07:02:34.387778 | orchestrator | io:
2026-04-06 07:02:34.387784 | orchestrator | client: 1.3 KiB/s rd, 1 op/s rd, 0 op/s wr
2026-04-06 07:02:34.387790 | orchestrator | 
2026-04-06 07:02:34.434704 | orchestrator | 
2026-04-06 07:02:34.434787 | orchestrator | # Ceph versions
2026-04-06 07:02:34.434797 | orchestrator | 
2026-04-06 07:02:34.434805 | orchestrator | + echo
2026-04-06 07:02:34.434813 | orchestrator | + echo '# Ceph versions'
2026-04-06 07:02:34.434821 | orchestrator | + echo
2026-04-06 07:02:34.434828 | orchestrator | + ceph versions
2026-04-06 07:02:35.014174 | orchestrator | {
2026-04-06 07:02:35.014279 | orchestrator | "mon": {
2026-04-06 07:02:35.014326 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-06 07:02:35.014338 | orchestrator | },
2026-04-06 07:02:35.014346 | orchestrator | "mgr": {
2026-04-06 07:02:35.014355 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-06 07:02:35.014363 | orchestrator | },
2026-04-06 07:02:35.014371 | orchestrator | "osd": {
2026-04-06 07:02:35.014382 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-04-06 07:02:35.014396 | orchestrator | },
2026-04-06 07:02:35.014411 | orchestrator | "mds": {
2026-04-06 07:02:35.014426 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-06 07:02:35.014463 | orchestrator | },
2026-04-06 07:02:35.014476 | orchestrator | "rgw": {
2026-04-06 07:02:35.014490 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-06 07:02:35.014502 | orchestrator | },
2026-04-06 07:02:35.014515 | orchestrator | "overall": {
2026-04-06 07:02:35.014528 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-04-06 07:02:35.014541 | orchestrator | }
2026-04-06 07:02:35.014554 | orchestrator | }
2026-04-06 07:02:35.068313 | orchestrator | 
2026-04-06 07:02:35.068405 | orchestrator | # Ceph OSD tree
2026-04-06 07:02:35.068424 | orchestrator | 
2026-04-06 07:02:35.068433 | orchestrator | + echo
2026-04-06 07:02:35.068442 | orchestrator | + echo '# Ceph
OSD tree'
2026-04-06 07:02:35.068451 | orchestrator | + echo
2026-04-06 07:02:35.068459 | orchestrator | + ceph osd df tree
2026-04-06 07:02:35.569548 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2026-04-06 07:02:35.569658 | orchestrator | -1 0.11691 - 120 GiB 8.0 GiB 7.6 GiB 45 KiB 333 MiB 112 GiB 6.63 1.00 - root default
2026-04-06 07:02:35.569674 | orchestrator | -3 0.03897 - 40 GiB 2.7 GiB 2.5 GiB 15 KiB 112 MiB 37 GiB 6.64 1.00 - host testbed-node-3
2026-04-06 07:02:35.569686 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 9 KiB 58 MiB 19 GiB 6.62 1.00 209 up osd.1
2026-04-06 07:02:35.569697 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 6 KiB 54 MiB 19 GiB 6.65 1.00 181 up osd.3
2026-04-06 07:02:35.569708 | orchestrator | -5 0.03897 - 40 GiB 2.7 GiB 2.5 GiB 15 KiB 117 MiB 37 GiB 6.65 1.00 - host testbed-node-4
2026-04-06 07:02:35.569718 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 7 KiB 58 MiB 19 GiB 6.19 0.93 190 up osd.0
2026-04-06 07:02:35.569729 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 8 KiB 58 MiB 19 GiB 7.11 1.07 202 up osd.4
2026-04-06 07:02:35.569740 | orchestrator | -7 0.03897 - 40 GiB 2.6 GiB 2.5 GiB 15 KiB 104 MiB 37 GiB 6.62 1.00 - host testbed-node-5
2026-04-06 07:02:35.569750 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 6 KiB 50 MiB 19 GiB 6.71 1.01 191 up osd.2
2026-04-06 07:02:35.569761 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 9 KiB 54 MiB 19 GiB 6.53 0.98 197 up osd.5
2026-04-06 07:02:35.569771 | orchestrator | TOTAL 120 GiB 8.0 GiB 7.6 GiB 47 KiB 333 MiB 112 GiB 6.63
2026-04-06 07:02:35.569783 | orchestrator | MIN/MAX VAR: 0.93/1.07 STDDEV: 0.27
2026-04-06 07:02:35.613446 | orchestrator | 
2026-04-06 07:02:35.613536 | orchestrator | # Ceph monitor status
2026-04-06 07:02:35.613550 | orchestrator | 
2026-04-06 07:02:35.613561 | orchestrator | + echo
2026-04-06 07:02:35.613571 | orchestrator | + echo '# Ceph monitor status'
2026-04-06 07:02:35.613582 | orchestrator | + echo
2026-04-06 07:02:35.613591 | orchestrator | + ceph mon stat
2026-04-06 07:02:36.219323 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 28, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2026-04-06 07:02:36.266347 | orchestrator | 
2026-04-06 07:02:36.266448 | orchestrator | # Ceph quorum status
2026-04-06 07:02:36.266464 | orchestrator | 
2026-04-06 07:02:36.266476 | orchestrator | + echo
2026-04-06 07:02:36.266488 | orchestrator | + echo '# Ceph quorum status'
2026-04-06 07:02:36.266499 | orchestrator | + echo
2026-04-06 07:02:36.266760 | orchestrator | + ceph quorum_status
2026-04-06 07:02:36.266782 | orchestrator | + jq
2026-04-06 07:02:36.925156 | orchestrator | {
2026-04-06 07:02:36.925255 | orchestrator | "election_epoch": 28,
2026-04-06 07:02:36.925270 | orchestrator | "quorum": [
2026-04-06 07:02:36.925281 | orchestrator | 0,
2026-04-06 07:02:36.925291 | orchestrator | 1,
2026-04-06 07:02:36.925300 | orchestrator | 2
2026-04-06 07:02:36.925309 | orchestrator | ],
2026-04-06 07:02:36.925343 | orchestrator | "quorum_names": [
2026-04-06 07:02:36.925354 | orchestrator | "testbed-node-0",
2026-04-06 07:02:36.925363 | orchestrator | "testbed-node-1",
2026-04-06 07:02:36.925372 | orchestrator | "testbed-node-2"
2026-04-06 07:02:36.925381 | orchestrator | ],
2026-04-06 07:02:36.925391 | orchestrator | "quorum_leader_name": "testbed-node-0",
2026-04-06 07:02:36.925401 | orchestrator | "quorum_age": 6428,
2026-04-06 07:02:36.925411 | orchestrator | "features": {
2026-04-06 07:02:36.925420 | orchestrator | "quorum_con": "4540138322906710015",
2026-04-06 07:02:36.925430 | orchestrator | "quorum_mon": [
2026-04-06 07:02:36.925439 | orchestrator | "kraken",
2026-04-06 07:02:36.925448 | orchestrator | "luminous",
2026-04-06 07:02:36.925458 | orchestrator | "mimic",
2026-04-06 07:02:36.925467 | orchestrator | "osdmap-prune",
2026-04-06 07:02:36.925477 | orchestrator | "nautilus",
2026-04-06 07:02:36.925486 | orchestrator | "octopus",
2026-04-06 07:02:36.925495 | orchestrator | "pacific",
2026-04-06 07:02:36.925505 | orchestrator | "elector-pinging",
2026-04-06 07:02:36.925514 | orchestrator | "quincy",
2026-04-06 07:02:36.925523 | orchestrator | "reef"
2026-04-06 07:02:36.925533 | orchestrator | ]
2026-04-06 07:02:36.925542 | orchestrator | },
2026-04-06 07:02:36.925552 | orchestrator | "monmap": {
2026-04-06 07:02:36.925561 | orchestrator | "epoch": 1,
2026-04-06 07:02:36.925571 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2026-04-06 07:02:36.925581 | orchestrator | "modified": "2026-04-06T02:57:07.260924Z",
2026-04-06 07:02:36.925590 | orchestrator | "created": "2026-04-06T02:57:07.260924Z",
2026-04-06 07:02:36.925600 | orchestrator | "min_mon_release": 18,
2026-04-06 07:02:36.925609 | orchestrator | "min_mon_release_name": "reef",
2026-04-06 07:02:36.925619 | orchestrator | "election_strategy": 1,
2026-04-06 07:02:36.925628 | orchestrator | "disallowed_leaders: ": "",
2026-04-06 07:02:36.925637 | orchestrator | "stretch_mode": false,
2026-04-06 07:02:36.925647 | orchestrator | "tiebreaker_mon": "",
2026-04-06 07:02:36.925656 | orchestrator | "removed_ranks: ": "",
2026-04-06 07:02:36.925666 | orchestrator | "features": {
2026-04-06 07:02:36.925675 | orchestrator | "persistent": [
2026-04-06 07:02:36.925684 | orchestrator | "kraken",
2026-04-06 07:02:36.925693 | orchestrator | "luminous",
2026-04-06 07:02:36.925703 | orchestrator | "mimic",
2026-04-06 07:02:36.925712 | orchestrator | "osdmap-prune",
2026-04-06 07:02:36.925721 | orchestrator | "nautilus",
2026-04-06 07:02:36.925730 | orchestrator | "octopus",
2026-04-06 07:02:36.925740 | orchestrator | "pacific",
2026-04-06 07:02:36.925749 | orchestrator | "elector-pinging",
2026-04-06 07:02:36.925759 | orchestrator | "quincy",
2026-04-06 07:02:36.925768 | orchestrator | "reef"
2026-04-06 07:02:36.925778 | orchestrator | ],
2026-04-06 07:02:36.925787 | orchestrator | "optional": []
2026-04-06 07:02:36.925797 | orchestrator | },
2026-04-06 07:02:36.925806 | orchestrator | "mons": [
2026-04-06 07:02:36.925816 | orchestrator | {
2026-04-06 07:02:36.925825 | orchestrator | "rank": 0,
2026-04-06 07:02:36.925834 | orchestrator | "name": "testbed-node-0",
2026-04-06 07:02:36.925844 | orchestrator | "public_addrs": {
2026-04-06 07:02:36.925853 | orchestrator | "addrvec": [
2026-04-06 07:02:36.925863 | orchestrator | {
2026-04-06 07:02:36.925872 | orchestrator | "type": "v2",
2026-04-06 07:02:36.925881 | orchestrator | "addr": "192.168.16.10:3300",
2026-04-06 07:02:36.925891 | orchestrator | "nonce": 0
2026-04-06 07:02:36.925900 | orchestrator | },
2026-04-06 07:02:36.925910 | orchestrator | {
2026-04-06 07:02:36.925919 | orchestrator | "type": "v1",
2026-04-06 07:02:36.925928 | orchestrator | "addr": "192.168.16.10:6789",
2026-04-06 07:02:36.925938 | orchestrator | "nonce": 0
2026-04-06 07:02:36.925947 | orchestrator | }
2026-04-06 07:02:36.925956 | orchestrator | ]
2026-04-06 07:02:36.925966 | orchestrator | },
2026-04-06 07:02:36.925997 | orchestrator | "addr": "192.168.16.10:6789/0",
2026-04-06 07:02:36.926007 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2026-04-06 07:02:36.926070 | orchestrator | "priority": 0,
2026-04-06 07:02:36.926082 | orchestrator | "weight": 0,
2026-04-06 07:02:36.926092 | orchestrator | "crush_location": "{}"
2026-04-06 07:02:36.926101 | orchestrator | },
2026-04-06 07:02:36.926111 | orchestrator | {
2026-04-06 07:02:36.926120 | orchestrator | "rank": 1,
2026-04-06 07:02:36.926129 | orchestrator | "name": "testbed-node-1",
2026-04-06 07:02:36.926139 | orchestrator | "public_addrs": {
2026-04-06 07:02:36.926149 | 
orchestrator | "addrvec": [ 2026-04-06 07:02:36.926158 | orchestrator | { 2026-04-06 07:02:36.926168 | orchestrator | "type": "v2", 2026-04-06 07:02:36.926185 | orchestrator | "addr": "192.168.16.11:3300", 2026-04-06 07:02:36.926199 | orchestrator | "nonce": 0 2026-04-06 07:02:36.926209 | orchestrator | }, 2026-04-06 07:02:36.926219 | orchestrator | { 2026-04-06 07:02:36.926229 | orchestrator | "type": "v1", 2026-04-06 07:02:36.926238 | orchestrator | "addr": "192.168.16.11:6789", 2026-04-06 07:02:36.926248 | orchestrator | "nonce": 0 2026-04-06 07:02:36.926257 | orchestrator | } 2026-04-06 07:02:36.926267 | orchestrator | ] 2026-04-06 07:02:36.926276 | orchestrator | }, 2026-04-06 07:02:36.926286 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-04-06 07:02:36.926296 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-04-06 07:02:36.926305 | orchestrator | "priority": 0, 2026-04-06 07:02:36.926314 | orchestrator | "weight": 0, 2026-04-06 07:02:36.926324 | orchestrator | "crush_location": "{}" 2026-04-06 07:02:36.926333 | orchestrator | }, 2026-04-06 07:02:36.926343 | orchestrator | { 2026-04-06 07:02:36.926352 | orchestrator | "rank": 2, 2026-04-06 07:02:36.926362 | orchestrator | "name": "testbed-node-2", 2026-04-06 07:02:36.926371 | orchestrator | "public_addrs": { 2026-04-06 07:02:36.926381 | orchestrator | "addrvec": [ 2026-04-06 07:02:36.926390 | orchestrator | { 2026-04-06 07:02:36.926399 | orchestrator | "type": "v2", 2026-04-06 07:02:36.926409 | orchestrator | "addr": "192.168.16.12:3300", 2026-04-06 07:02:36.926418 | orchestrator | "nonce": 0 2026-04-06 07:02:36.926428 | orchestrator | }, 2026-04-06 07:02:36.926437 | orchestrator | { 2026-04-06 07:02:36.926447 | orchestrator | "type": "v1", 2026-04-06 07:02:36.926456 | orchestrator | "addr": "192.168.16.12:6789", 2026-04-06 07:02:36.926466 | orchestrator | "nonce": 0 2026-04-06 07:02:36.926475 | orchestrator | } 2026-04-06 07:02:36.926485 | orchestrator | ] 2026-04-06 07:02:36.926494 | 
orchestrator | }, 2026-04-06 07:02:36.926504 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-04-06 07:02:36.926513 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-04-06 07:02:36.926523 | orchestrator | "priority": 0, 2026-04-06 07:02:36.926532 | orchestrator | "weight": 0, 2026-04-06 07:02:36.926541 | orchestrator | "crush_location": "{}" 2026-04-06 07:02:36.926551 | orchestrator | } 2026-04-06 07:02:36.926560 | orchestrator | ] 2026-04-06 07:02:36.926570 | orchestrator | } 2026-04-06 07:02:36.926580 | orchestrator | } 2026-04-06 07:02:36.926602 | orchestrator | 2026-04-06 07:02:36.926613 | orchestrator | # Ceph free space status 2026-04-06 07:02:36.926623 | orchestrator | 2026-04-06 07:02:36.926632 | orchestrator | + echo 2026-04-06 07:02:36.926642 | orchestrator | + echo '# Ceph free space status' 2026-04-06 07:02:36.926651 | orchestrator | + echo 2026-04-06 07:02:36.926661 | orchestrator | + ceph df 2026-04-06 07:02:37.521207 | orchestrator | --- RAW STORAGE --- 2026-04-06 07:02:37.521294 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-04-06 07:02:37.521316 | orchestrator | hdd 120 GiB 112 GiB 8.0 GiB 8.0 GiB 6.63 2026-04-06 07:02:37.521324 | orchestrator | TOTAL 120 GiB 112 GiB 8.0 GiB 8.0 GiB 6.63 2026-04-06 07:02:37.521333 | orchestrator | 2026-04-06 07:02:37.521342 | orchestrator | --- POOLS --- 2026-04-06 07:02:37.521350 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-04-06 07:02:37.521360 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-04-06 07:02:37.521368 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-04-06 07:02:37.521376 | orchestrator | cephfs_metadata 3 16 11 KiB 22 120 KiB 0 35 GiB 2026-04-06 07:02:37.521384 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-04-06 07:02:37.521392 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-04-06 07:02:37.521400 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-04-06 
07:02:37.521407 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-04-06 07:02:37.521425 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-04-06 07:02:37.521434 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2026-04-06 07:02:37.521442 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-04-06 07:02:37.521449 | orchestrator | volumes 11 32 325 MiB 267 974 MiB 0.89 35 GiB 2026-04-06 07:02:37.521475 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.94 35 GiB 2026-04-06 07:02:37.521483 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-04-06 07:02:37.521491 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-04-06 07:02:37.593135 | orchestrator | ++ semver 10.0.0 5.0.0 2026-04-06 07:02:37.647694 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-06 07:02:37.647759 | orchestrator | + osism apply facts 2026-04-06 07:02:39.001444 | orchestrator | 2026-04-06 07:02:39 | INFO  | Prepare task for execution of facts. 2026-04-06 07:02:39.072163 | orchestrator | 2026-04-06 07:02:39 | INFO  | Task 82303c33-3fd5-4b5c-aa83-5210069d210d (facts) was prepared for execution. 2026-04-06 07:02:39.072226 | orchestrator | 2026-04-06 07:02:39 | INFO  | It takes a moment until task 82303c33-3fd5-4b5c-aa83-5210069d210d (facts) has been started and output is visible here. 
2026-04-06 07:03:01.607352 | orchestrator |
2026-04-06 07:03:01.607438 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-06 07:03:01.607445 | orchestrator |
2026-04-06 07:03:01.607450 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-06 07:03:01.607455 | orchestrator | Monday 06 April 2026 07:02:44 +0000 (0:00:02.102) 0:00:02.102 **********
2026-04-06 07:03:01.607459 | orchestrator | ok: [testbed-manager]
2026-04-06 07:03:01.607465 | orchestrator | ok: [testbed-node-0]
2026-04-06 07:03:01.607469 | orchestrator | ok: [testbed-node-1]
2026-04-06 07:03:01.607474 | orchestrator | ok: [testbed-node-2]
2026-04-06 07:03:01.607477 | orchestrator | ok: [testbed-node-3]
2026-04-06 07:03:01.607481 | orchestrator | ok: [testbed-node-4]
2026-04-06 07:03:01.607485 | orchestrator | ok: [testbed-node-5]
2026-04-06 07:03:01.607489 | orchestrator |
2026-04-06 07:03:01.607493 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-06 07:03:01.607497 | orchestrator | Monday 06 April 2026 07:02:47 +0000 (0:00:03.175) 0:00:05.277 **********
2026-04-06 07:03:01.607501 | orchestrator | skipping: [testbed-manager]
2026-04-06 07:03:01.607505 | orchestrator | skipping: [testbed-node-0]
2026-04-06 07:03:01.607509 | orchestrator | skipping: [testbed-node-1]
2026-04-06 07:03:01.607513 | orchestrator | skipping: [testbed-node-2]
2026-04-06 07:03:01.607517 | orchestrator | skipping: [testbed-node-3]
2026-04-06 07:03:01.607521 | orchestrator | skipping: [testbed-node-4]
2026-04-06 07:03:01.607524 | orchestrator | skipping: [testbed-node-5]
2026-04-06 07:03:01.607528 | orchestrator |
2026-04-06 07:03:01.607532 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-06 07:03:01.607536 | orchestrator |
2026-04-06 07:03:01.607539 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-06 07:03:01.607543 | orchestrator | Monday 06 April 2026 07:02:50 +0000 (0:00:02.920) 0:00:08.198 **********
2026-04-06 07:03:01.607547 | orchestrator | ok: [testbed-node-0]
2026-04-06 07:03:01.607551 | orchestrator | ok: [testbed-node-2]
2026-04-06 07:03:01.607554 | orchestrator | ok: [testbed-node-1]
2026-04-06 07:03:01.607558 | orchestrator | ok: [testbed-manager]
2026-04-06 07:03:01.607562 | orchestrator | ok: [testbed-node-3]
2026-04-06 07:03:01.607566 | orchestrator | ok: [testbed-node-5]
2026-04-06 07:03:01.607569 | orchestrator | ok: [testbed-node-4]
2026-04-06 07:03:01.607573 | orchestrator |
2026-04-06 07:03:01.607577 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-06 07:03:01.607582 | orchestrator |
2026-04-06 07:03:01.607588 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-06 07:03:01.607594 | orchestrator | Monday 06 April 2026 07:02:58 +0000 (0:00:07.318) 0:00:15.517 **********
2026-04-06 07:03:01.607600 | orchestrator | skipping: [testbed-manager]
2026-04-06 07:03:01.607606 | orchestrator | skipping: [testbed-node-0]
2026-04-06 07:03:01.607612 | orchestrator | skipping: [testbed-node-1]
2026-04-06 07:03:01.607618 | orchestrator | skipping: [testbed-node-2]
2026-04-06 07:03:01.607624 | orchestrator | skipping: [testbed-node-3]
2026-04-06 07:03:01.607650 | orchestrator | skipping: [testbed-node-4]
2026-04-06 07:03:01.607656 | orchestrator | skipping: [testbed-node-5]
2026-04-06 07:03:01.607662 | orchestrator |
2026-04-06 07:03:01.607667 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 07:03:01.607674 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 07:03:01.607681 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 07:03:01.607687 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 07:03:01.607693 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 07:03:01.607699 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 07:03:01.607705 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 07:03:01.607711 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 07:03:01.607717 | orchestrator |
2026-04-06 07:03:01.607724 | orchestrator |
2026-04-06 07:03:01.607729 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 07:03:01.607746 | orchestrator | Monday 06 April 2026 07:03:01 +0000 (0:00:03.104) 0:00:18.621 **********
2026-04-06 07:03:01.607751 | orchestrator | ===============================================================================
2026-04-06 07:03:01.607755 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.32s
2026-04-06 07:03:01.607759 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 3.18s
2026-04-06 07:03:01.607763 | orchestrator | Gather facts for all hosts ---------------------------------------------- 3.10s
2026-04-06 07:03:01.607767 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 2.92s
2026-04-06 07:03:01.797242 | orchestrator | + osism validate ceph-mons
2026-04-06 07:04:11.697767 | orchestrator |
2026-04-06 07:04:11.697884 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-04-06 07:04:11.697901 | orchestrator |
2026-04-06 07:04:11.697914 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-06 07:04:11.697926 | orchestrator | Monday 06 April 2026 07:03:18 +0000 (0:00:01.789) 0:00:01.789 **********
2026-04-06 07:04:11.697937 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-06 07:04:11.697948 | orchestrator |
2026-04-06 07:04:11.697959 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-06 07:04:11.698147 | orchestrator | Monday 06 April 2026 07:03:21 +0000 (0:00:02.730) 0:00:04.520 **********
2026-04-06 07:04:11.698169 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-06 07:04:11.698190 | orchestrator |
2026-04-06 07:04:11.698211 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-06 07:04:11.698231 | orchestrator | Monday 06 April 2026 07:03:22 +0000 (0:00:01.178) 0:00:06.270 **********
2026-04-06 07:04:11.698252 | orchestrator | ok: [testbed-node-0]
2026-04-06 07:04:11.698274 | orchestrator |
2026-04-06 07:04:11.698293 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-04-06 07:04:11.698307 | orchestrator | Monday 06 April 2026 07:03:24 +0000 (0:00:01.178) 0:00:07.449 **********
2026-04-06 07:04:11.698321 | orchestrator | ok: [testbed-node-0]
2026-04-06 07:04:11.698334 | orchestrator | ok: [testbed-node-1]
2026-04-06 07:04:11.698348 | orchestrator | ok: [testbed-node-2]
2026-04-06 07:04:11.698361 | orchestrator |
2026-04-06 07:04:11.698374 | orchestrator | TASK [Get container info] ******************************************************
2026-04-06 07:04:11.698411 | orchestrator | Monday 06 April 2026 07:03:25 +0000 (0:00:01.826) 0:00:09.275 **********
2026-04-06 07:04:11.698426 | orchestrator | ok: [testbed-node-2]
2026-04-06 07:04:11.698438 | orchestrator | ok: [testbed-node-0]
2026-04-06 07:04:11.698451 | orchestrator | ok: [testbed-node-1]
2026-04-06 07:04:11.698464 | orchestrator |
2026-04-06 07:04:11.698476 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-04-06 07:04:11.698489 | orchestrator | Monday 06 April 2026 07:03:28 +0000 (0:00:02.539) 0:00:11.815 **********
2026-04-06 07:04:11.698503 | orchestrator | skipping: [testbed-node-0]
2026-04-06 07:04:11.698516 | orchestrator | skipping: [testbed-node-1]
2026-04-06 07:04:11.698528 | orchestrator | skipping: [testbed-node-2]
2026-04-06 07:04:11.698541 | orchestrator |
2026-04-06 07:04:11.698554 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-04-06 07:04:11.698568 | orchestrator | Monday 06 April 2026 07:03:29 +0000 (0:00:01.391) 0:00:13.207 **********
2026-04-06 07:04:11.698581 | orchestrator | ok: [testbed-node-0]
2026-04-06 07:04:11.698593 | orchestrator | ok: [testbed-node-1]
2026-04-06 07:04:11.698605 | orchestrator | ok: [testbed-node-2]
2026-04-06 07:04:11.698618 | orchestrator |
2026-04-06 07:04:11.698630 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-06 07:04:11.698660 | orchestrator | Monday 06 April 2026 07:03:31 +0000 (0:00:01.409) 0:00:14.616 **********
2026-04-06 07:04:11.698671 | orchestrator | ok: [testbed-node-0]
2026-04-06 07:04:11.698682 | orchestrator | ok: [testbed-node-1]
2026-04-06 07:04:11.698692 | orchestrator | ok: [testbed-node-2]
2026-04-06 07:04:11.698703 | orchestrator |
2026-04-06 07:04:11.698714 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-04-06 07:04:11.698724 | orchestrator | Monday 06 April 2026 07:03:32 +0000 (0:00:01.306) 0:00:15.923 **********
2026-04-06 07:04:11.698735 | orchestrator | skipping: [testbed-node-0]
2026-04-06 07:04:11.698746 | orchestrator | skipping: [testbed-node-1]
2026-04-06 07:04:11.698757 | orchestrator | skipping: [testbed-node-2]
2026-04-06 07:04:11.698768 | orchestrator |
2026-04-06 07:04:11.698778 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-04-06 07:04:11.698789 | orchestrator | Monday 06 April 2026 07:03:34 +0000 (0:00:01.396) 0:00:17.320 **********
2026-04-06 07:04:11.698800 | orchestrator | ok: [testbed-node-0]
2026-04-06 07:04:11.698811 | orchestrator | ok: [testbed-node-1]
2026-04-06 07:04:11.698822 | orchestrator | ok: [testbed-node-2]
2026-04-06 07:04:11.698832 | orchestrator |
2026-04-06 07:04:11.698843 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-06 07:04:11.698854 | orchestrator | Monday 06 April 2026 07:03:35 +0000 (0:00:01.349) 0:00:18.669 **********
2026-04-06 07:04:11.698864 | orchestrator | skipping: [testbed-node-0]
2026-04-06 07:04:11.698875 | orchestrator |
2026-04-06 07:04:11.698886 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-06 07:04:11.698897 | orchestrator | Monday 06 April 2026 07:03:36 +0000 (0:00:01.304) 0:00:19.973 **********
2026-04-06 07:04:11.698907 | orchestrator | skipping: [testbed-node-0]
2026-04-06 07:04:11.698918 | orchestrator |
2026-04-06 07:04:11.698928 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-06 07:04:11.698939 | orchestrator | Monday 06 April 2026 07:03:37 +0000 (0:00:01.234) 0:00:21.207 **********
2026-04-06 07:04:11.698950 | orchestrator | skipping: [testbed-node-0]
2026-04-06 07:04:11.698982 | orchestrator |
2026-04-06 07:04:11.698994 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-06 07:04:11.699005 | orchestrator | Monday 06 April 2026 07:03:39 +0000 (0:00:01.274) 0:00:22.482 **********
2026-04-06 07:04:11.699016 | orchestrator |
2026-04-06 07:04:11.699026 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-06 07:04:11.699037 | orchestrator | Monday 06 April 2026 07:03:39 +0000 (0:00:00.442) 0:00:22.925 **********
2026-04-06 07:04:11.699048 | orchestrator |
2026-04-06 07:04:11.699064 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-06 07:04:11.699119 | orchestrator | Monday 06 April 2026 07:03:40 +0000 (0:00:00.672) 0:00:23.597 **********
2026-04-06 07:04:11.699132 | orchestrator |
2026-04-06 07:04:11.699143 | orchestrator | TASK [Print report file information] *******************************************
2026-04-06 07:04:11.699154 | orchestrator | Monday 06 April 2026 07:03:41 +0000 (0:00:00.818) 0:00:24.416 **********
2026-04-06 07:04:11.699164 | orchestrator | skipping: [testbed-node-0]
2026-04-06 07:04:11.699175 | orchestrator |
2026-04-06 07:04:11.699186 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-04-06 07:04:11.699197 | orchestrator | Monday 06 April 2026 07:03:42 +0000 (0:00:01.262) 0:00:25.679 **********
2026-04-06 07:04:11.699208 | orchestrator | skipping: [testbed-node-0]
2026-04-06 07:04:11.699219 | orchestrator |
2026-04-06 07:04:11.699251 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-04-06 07:04:11.699262 | orchestrator | Monday 06 April 2026 07:03:43 +0000 (0:00:01.293) 0:00:26.972 **********
2026-04-06 07:04:11.699273 | orchestrator | ok: [testbed-node-0]
2026-04-06 07:04:11.699284 | orchestrator |
2026-04-06 07:04:11.699295 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-04-06 07:04:11.699306 | orchestrator | Monday 06 April 2026 07:03:44 +0000 (0:00:01.140) 0:00:28.113 **********
2026-04-06 07:04:11.699316 | orchestrator | changed: [testbed-node-0]
2026-04-06 07:04:11.699327 | orchestrator |
2026-04-06 07:04:11.699338 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-04-06 07:04:11.699349 | orchestrator | Monday 06 April 2026 07:03:47 +0000 (0:00:02.674) 0:00:30.787 **********
2026-04-06 07:04:11.699360 | orchestrator | ok: [testbed-node-0]
2026-04-06 07:04:11.699371 | orchestrator |
2026-04-06 07:04:11.699382 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-04-06 07:04:11.699393 | orchestrator | Monday 06 April 2026 07:03:48 +0000 (0:00:01.425) 0:00:32.213 **********
2026-04-06 07:04:11.699403 | orchestrator | skipping: [testbed-node-0]
2026-04-06 07:04:11.699414 | orchestrator |
2026-04-06 07:04:11.699425 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-04-06 07:04:11.699436 | orchestrator | Monday 06 April 2026 07:03:50 +0000 (0:00:01.110) 0:00:33.324 **********
2026-04-06 07:04:11.699447 | orchestrator | ok: [testbed-node-0]
2026-04-06 07:04:11.699457 | orchestrator |
2026-04-06 07:04:11.699468 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-04-06 07:04:11.699479 | orchestrator | Monday 06 April 2026 07:03:51 +0000 (0:00:01.333) 0:00:34.658 **********
2026-04-06 07:04:11.699490 | orchestrator | ok: [testbed-node-0]
2026-04-06 07:04:11.699501 | orchestrator |
2026-04-06 07:04:11.699512 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-04-06 07:04:11.699523 | orchestrator | Monday 06 April 2026 07:03:52 +0000 (0:00:01.278) 0:00:35.936 **********
2026-04-06 07:04:11.699533 | orchestrator | skipping: [testbed-node-0]
2026-04-06 07:04:11.699544 | orchestrator |
2026-04-06 07:04:11.699555 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-04-06 07:04:11.699566 | orchestrator | Monday 06 April 2026 07:03:53 +0000 (0:00:01.161) 0:00:37.097 **********
2026-04-06 07:04:11.699577 | orchestrator | ok: [testbed-node-0]
2026-04-06 07:04:11.699588 | orchestrator |
2026-04-06 07:04:11.699598 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-04-06 07:04:11.699609 | orchestrator | Monday 06 April 2026 07:03:54 +0000 (0:00:01.105) 0:00:38.203 **********
2026-04-06 07:04:11.699620 | orchestrator | ok: [testbed-node-0]
2026-04-06 07:04:11.699630 | orchestrator |
2026-04-06 07:04:11.699641 | orchestrator | TASK [Gather status data] ******************************************************
2026-04-06 07:04:11.699652 | orchestrator | Monday 06 April 2026 07:03:56 +0000 (0:00:01.238) 0:00:39.442 **********
2026-04-06 07:04:11.699662 | orchestrator | changed: [testbed-node-0]
2026-04-06 07:04:11.699673 | orchestrator |
2026-04-06 07:04:11.699684 | orchestrator | TASK [Set health test data] ****************************************************
2026-04-06 07:04:11.699695 | orchestrator | Monday 06 April 2026 07:03:58 +0000 (0:00:02.374) 0:00:41.817 **********
2026-04-06 07:04:11.699713 | orchestrator | ok: [testbed-node-0]
2026-04-06 07:04:11.699723 | orchestrator |
2026-04-06 07:04:11.699734 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-04-06 07:04:11.699745 | orchestrator | Monday 06 April 2026 07:03:59 +0000 (0:00:01.300) 0:00:43.117 **********
2026-04-06 07:04:11.699756 | orchestrator | skipping: [testbed-node-0]
2026-04-06 07:04:11.699766 | orchestrator |
2026-04-06 07:04:11.699777 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-04-06 07:04:11.699788 | orchestrator | Monday 06 April 2026 07:04:01 +0000 (0:00:01.192) 0:00:44.310 **********
2026-04-06 07:04:11.699798 | orchestrator | ok: [testbed-node-0]
2026-04-06 07:04:11.699809 | orchestrator |
2026-04-06 07:04:11.699820 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-04-06 07:04:11.699831 | orchestrator | Monday 06 April 2026 07:04:02 +0000 (0:00:01.125) 0:00:45.435 **********
2026-04-06 07:04:11.699842 | orchestrator | skipping: [testbed-node-0]
2026-04-06 07:04:11.699852 | orchestrator |
2026-04-06 07:04:11.699863 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-04-06 07:04:11.699874 | orchestrator | Monday 06 April 2026 07:04:03 +0000 (0:00:01.125) 0:00:46.561 **********
2026-04-06 07:04:11.699885 | orchestrator | skipping: [testbed-node-0]
2026-04-06 07:04:11.699895 | orchestrator |
2026-04-06 07:04:11.699906 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-06 07:04:11.699917 | orchestrator | Monday 06 April 2026 07:04:04 +0000 (0:00:01.155) 0:00:47.717 **********
2026-04-06 07:04:11.699928 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-06 07:04:11.699938 | orchestrator |
2026-04-06 07:04:11.699949 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-06 07:04:11.699960 | orchestrator | Monday 06 April 2026 07:04:05 +0000 (0:00:01.245) 0:00:48.962 **********
2026-04-06 07:04:11.699988 | orchestrator | skipping: [testbed-node-0]
2026-04-06 07:04:11.700000 | orchestrator |
2026-04-06 07:04:11.700010 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-06 07:04:11.700021 | orchestrator | Monday 06 April 2026 07:04:06 +0000 (0:00:01.245) 0:00:50.208 **********
2026-04-06 07:04:11.700037 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-06 07:04:11.700048 | orchestrator |
2026-04-06 07:04:11.700059 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-06 07:04:11.700070 | orchestrator | Monday 06 April 2026 07:04:09 +0000 (0:00:02.923) 0:00:53.131 **********
2026-04-06 07:04:11.700081 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-06 07:04:11.700092 | orchestrator |
2026-04-06 07:04:11.700103 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-06 07:04:11.700114 | orchestrator | Monday 06 April 2026 07:04:11 +0000 (0:00:01.526) 0:00:54.658 **********
2026-04-06 07:04:11.700124 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-06 07:04:11.700135 | orchestrator |
2026-04-06 07:04:11.700153 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-06 07:04:18.913043 | orchestrator | Monday 06 April 2026 07:04:12 +0000 (0:00:01.299) 0:00:55.957 **********
2026-04-06 07:04:18.913180 | orchestrator |
2026-04-06 07:04:18.913208 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-06 07:04:18.913229 | orchestrator | Monday 06 April 2026 07:04:13 +0000 (0:00:00.474) 0:00:56.432 **********
2026-04-06 07:04:18.913248 | orchestrator |
2026-04-06 07:04:18.913267 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-06 07:04:18.913286 | orchestrator | Monday 06 April 2026 07:04:13 +0000 (0:00:00.467) 0:00:56.899 **********
2026-04-06 07:04:18.913306 | orchestrator |
2026-04-06 07:04:18.913324 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-06 07:04:18.913337 | orchestrator | Monday 06 April 2026 07:04:14 +0000 (0:00:00.792) 0:00:57.692 **********
2026-04-06 07:04:18.913378 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-06 07:04:18.913389 | orchestrator |
2026-04-06 07:04:18.913400 | orchestrator | TASK [Print report file information] *******************************************
2026-04-06 07:04:18.913411 | orchestrator | Monday 06 April 2026 07:04:16 +0000 (0:00:02.442) 0:01:00.135 **********
2026-04-06 07:04:18.913421 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-04-06 07:04:18.913432 | orchestrator |     "msg": [
2026-04-06 07:04:18.913445 | orchestrator |         "Validator run completed.",
2026-04-06 07:04:18.913456 | orchestrator |         "You can find the report file here:",
2026-04-06 07:04:18.913468 | orchestrator |         "/opt/reports/validator/ceph-mons-validator-2026-04-06T07:03:19+00:00-report.json",
2026-04-06 07:04:18.913483 | orchestrator |         "on the following host:",
2026-04-06 07:04:18.913501 | orchestrator |         "testbed-manager"
2026-04-06 07:04:18.913519 | orchestrator |     ]
2026-04-06 07:04:18.913539 | orchestrator | }
2026-04-06 07:04:18.913558 | orchestrator |
2026-04-06 07:04:18.913581 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 07:04:18.913603 | orchestrator | testbed-node-0 : ok=24  changed=4  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-06 07:04:18.913623 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 07:04:18.913641 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 07:04:18.913661 | orchestrator |
2026-04-06 07:04:18.913679 | orchestrator |
2026-04-06 07:04:18.913697 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 07:04:18.913716 | orchestrator | Monday 06 April 2026 07:04:18 +0000 (0:00:01.712) 0:01:01.848 **********
2026-04-06 07:04:18.913734 | orchestrator | ===============================================================================
2026-04-06 07:04:18.913752 | orchestrator | Aggregate test results step one ----------------------------------------- 2.92s
2026-04-06 07:04:18.913767 | orchestrator | Get timestamp for report file ------------------------------------------- 2.73s
2026-04-06 07:04:18.913784 | orchestrator | Get monmap info from one mon container ---------------------------------- 2.67s
2026-04-06 07:04:18.913804 | orchestrator | Get container info ------------------------------------------------------ 2.54s
2026-04-06 07:04:18.913823 | orchestrator | Write report file ------------------------------------------------------- 2.44s
2026-04-06 07:04:18.913840 | orchestrator | Gather status data ------------------------------------------------------ 2.38s
2026-04-06 07:04:18.913857 | orchestrator | Flush handlers ---------------------------------------------------------- 1.93s
2026-04-06 07:04:18.913874 | orchestrator | Prepare test data for container existance test -------------------------- 1.83s
2026-04-06 07:04:18.913891 | orchestrator | Create report output directory ------------------------------------------ 1.75s
2026-04-06 07:04:18.913908 | orchestrator | Flush handlers ---------------------------------------------------------- 1.74s
2026-04-06 07:04:18.913925 | orchestrator | Print report file information ------------------------------------------- 1.71s
2026-04-06 07:04:18.913944 | orchestrator | Aggregate test results step two ----------------------------------------- 1.53s
2026-04-06 07:04:18.914116 | orchestrator | Set quorum test data ---------------------------------------------------- 1.43s
2026-04-06 07:04:18.914151 | orchestrator | Set test result to passed if container is existing ---------------------- 1.41s
2026-04-06 07:04:18.914171 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 1.40s
2026-04-06 07:04:18.914190 | orchestrator | Set test result to failed if container is missing ----------------------- 1.39s
2026-04-06 07:04:18.914209 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 1.35s
2026-04-06 07:04:18.914228 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 1.33s
2026-04-06 07:04:18.914247 | orchestrator | Prepare test data ------------------------------------------------------- 1.31s
2026-04-06 07:04:18.914309 | orchestrator | Aggregate test results step one ----------------------------------------- 1.30s
2026-04-06 07:04:19.102212 | orchestrator | + osism validate ceph-mgrs
2026-04-06 07:05:22.057251 | orchestrator |
2026-04-06 07:05:22.057367 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-04-06 07:05:22.057384 | orchestrator |
2026-04-06 07:05:22.057395 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-06 07:05:22.057407 | orchestrator | Monday 06 April 2026 07:04:35 +0000 (0:00:01.878) 0:00:01.878 **********
2026-04-06 07:05:22.057418 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-06 07:05:22.057429 | orchestrator |
2026-04-06 07:05:22.057440 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-06 07:05:22.057452 | orchestrator | Monday 06 April 2026 07:04:38 +0000 (0:00:02.716) 0:00:04.595 **********
2026-04-06 07:05:22.057463 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-06 07:05:22.057474 | orchestrator |
2026-04-06 07:05:22.057485 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-06 07:05:22.057495 | orchestrator | Monday 06 April 2026 07:04:40 +0000 (0:00:01.710) 0:00:06.306 **********
2026-04-06 07:05:22.057506 | orchestrator | ok: [testbed-node-0]
2026-04-06 07:05:22.057519 | orchestrator |
2026-04-06 07:05:22.057529 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-04-06 07:05:22.057540 | orchestrator | Monday 06 April 2026 07:04:41 +0000 (0:00:01.123) 0:00:07.429 **********
2026-04-06 07:05:22.057551 | orchestrator | ok: [testbed-node-0]
2026-04-06 07:05:22.057562 | orchestrator | ok: [testbed-node-1]
2026-04-06 07:05:22.057573 | orchestrator | ok: [testbed-node-2]
2026-04-06 07:05:22.057583 | orchestrator |
2026-04-06 07:05:22.057595 | orchestrator | TASK [Get container info] ******************************************************
2026-04-06 07:05:22.057606 | orchestrator | Monday 06 April 2026 07:04:43 +0000 (0:00:01.775) 0:00:09.205 ********** 2026-04-06 07:05:22.057617 | orchestrator | ok: [testbed-node-1] 2026-04-06 07:05:22.057628 | orchestrator | ok: [testbed-node-2] 2026-04-06 07:05:22.057639 | orchestrator | ok: [testbed-node-0] 2026-04-06 07:05:22.057649 | orchestrator | 2026-04-06 07:05:22.057660 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-06 07:05:22.057671 | orchestrator | Monday 06 April 2026 07:04:45 +0000 (0:00:02.616) 0:00:11.821 ********** 2026-04-06 07:05:22.057682 | orchestrator | skipping: [testbed-node-0] 2026-04-06 07:05:22.057693 | orchestrator | skipping: [testbed-node-1] 2026-04-06 07:05:22.057704 | orchestrator | skipping: [testbed-node-2] 2026-04-06 07:05:22.057715 | orchestrator | 2026-04-06 07:05:22.057743 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-06 07:05:22.057766 | orchestrator | Monday 06 April 2026 07:04:47 +0000 (0:00:01.351) 0:00:13.173 ********** 2026-04-06 07:05:22.057777 | orchestrator | ok: [testbed-node-0] 2026-04-06 07:05:22.057788 | orchestrator | ok: [testbed-node-1] 2026-04-06 07:05:22.057799 | orchestrator | ok: [testbed-node-2] 2026-04-06 07:05:22.057810 | orchestrator | 2026-04-06 07:05:22.057821 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-06 07:05:22.057832 | orchestrator | Monday 06 April 2026 07:04:48 +0000 (0:00:01.338) 0:00:14.512 ********** 2026-04-06 07:05:22.057843 | orchestrator | ok: [testbed-node-0] 2026-04-06 07:05:22.057853 | orchestrator | ok: [testbed-node-1] 2026-04-06 07:05:22.057864 | orchestrator | ok: [testbed-node-2] 2026-04-06 07:05:22.057875 | orchestrator | 2026-04-06 07:05:22.057886 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2026-04-06 07:05:22.057897 | orchestrator | Monday 06 
April 2026 07:04:49 +0000 (0:00:01.312) 0:00:15.824 ********** 2026-04-06 07:05:22.057907 | orchestrator | skipping: [testbed-node-0] 2026-04-06 07:05:22.057918 | orchestrator | skipping: [testbed-node-1] 2026-04-06 07:05:22.057929 | orchestrator | skipping: [testbed-node-2] 2026-04-06 07:05:22.057940 | orchestrator | 2026-04-06 07:05:22.057951 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-04-06 07:05:22.058087 | orchestrator | Monday 06 April 2026 07:04:51 +0000 (0:00:01.389) 0:00:17.214 ********** 2026-04-06 07:05:22.058102 | orchestrator | ok: [testbed-node-0] 2026-04-06 07:05:22.058122 | orchestrator | ok: [testbed-node-1] 2026-04-06 07:05:22.058133 | orchestrator | ok: [testbed-node-2] 2026-04-06 07:05:22.058144 | orchestrator | 2026-04-06 07:05:22.058155 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-06 07:05:22.058166 | orchestrator | Monday 06 April 2026 07:04:52 +0000 (0:00:01.332) 0:00:18.547 ********** 2026-04-06 07:05:22.058177 | orchestrator | skipping: [testbed-node-0] 2026-04-06 07:05:22.058187 | orchestrator | 2026-04-06 07:05:22.058198 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-06 07:05:22.058209 | orchestrator | Monday 06 April 2026 07:04:53 +0000 (0:00:01.281) 0:00:19.828 ********** 2026-04-06 07:05:22.058220 | orchestrator | skipping: [testbed-node-0] 2026-04-06 07:05:22.058231 | orchestrator | 2026-04-06 07:05:22.058242 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-06 07:05:22.058252 | orchestrator | Monday 06 April 2026 07:04:54 +0000 (0:00:01.268) 0:00:21.096 ********** 2026-04-06 07:05:22.058263 | orchestrator | skipping: [testbed-node-0] 2026-04-06 07:05:22.058274 | orchestrator | 2026-04-06 07:05:22.058285 | orchestrator | TASK [Flush handlers] ********************************************************** 
2026-04-06 07:05:22.058296 | orchestrator | Monday 06 April 2026 07:04:56 +0000 (0:00:01.345) 0:00:22.441 ********** 2026-04-06 07:05:22.058306 | orchestrator | 2026-04-06 07:05:22.058317 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 07:05:22.058328 | orchestrator | Monday 06 April 2026 07:04:56 +0000 (0:00:00.451) 0:00:22.893 ********** 2026-04-06 07:05:22.058339 | orchestrator | 2026-04-06 07:05:22.058349 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 07:05:22.058360 | orchestrator | Monday 06 April 2026 07:04:57 +0000 (0:00:00.477) 0:00:23.371 ********** 2026-04-06 07:05:22.058371 | orchestrator | 2026-04-06 07:05:22.058381 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-06 07:05:22.058392 | orchestrator | Monday 06 April 2026 07:04:58 +0000 (0:00:00.940) 0:00:24.311 ********** 2026-04-06 07:05:22.058403 | orchestrator | skipping: [testbed-node-0] 2026-04-06 07:05:22.058413 | orchestrator | 2026-04-06 07:05:22.058424 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-06 07:05:22.058435 | orchestrator | Monday 06 April 2026 07:04:59 +0000 (0:00:01.252) 0:00:25.564 ********** 2026-04-06 07:05:22.058446 | orchestrator | skipping: [testbed-node-0] 2026-04-06 07:05:22.058457 | orchestrator | 2026-04-06 07:05:22.058486 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-04-06 07:05:22.058498 | orchestrator | Monday 06 April 2026 07:05:00 +0000 (0:00:01.290) 0:00:26.854 ********** 2026-04-06 07:05:22.058508 | orchestrator | ok: [testbed-node-0] 2026-04-06 07:05:22.058519 | orchestrator | 2026-04-06 07:05:22.058530 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2026-04-06 07:05:22.058540 | orchestrator | Monday 06 April 2026 07:05:01 
+0000 (0:00:01.122) 0:00:27.977 ********** 2026-04-06 07:05:22.058551 | orchestrator | changed: [testbed-node-0] 2026-04-06 07:05:22.058562 | orchestrator | 2026-04-06 07:05:22.058573 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-04-06 07:05:22.058583 | orchestrator | Monday 06 April 2026 07:05:04 +0000 (0:00:03.010) 0:00:30.987 ********** 2026-04-06 07:05:22.058594 | orchestrator | ok: [testbed-node-0] 2026-04-06 07:05:22.058605 | orchestrator | 2026-04-06 07:05:22.058615 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-04-06 07:05:22.058626 | orchestrator | Monday 06 April 2026 07:05:06 +0000 (0:00:01.269) 0:00:32.257 ********** 2026-04-06 07:05:22.058637 | orchestrator | ok: [testbed-node-0] 2026-04-06 07:05:22.058647 | orchestrator | 2026-04-06 07:05:22.058658 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-04-06 07:05:22.058677 | orchestrator | Monday 06 April 2026 07:05:07 +0000 (0:00:01.293) 0:00:33.551 ********** 2026-04-06 07:05:22.058688 | orchestrator | skipping: [testbed-node-0] 2026-04-06 07:05:22.058699 | orchestrator | 2026-04-06 07:05:22.058710 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-04-06 07:05:22.058721 | orchestrator | Monday 06 April 2026 07:05:08 +0000 (0:00:01.146) 0:00:34.698 ********** 2026-04-06 07:05:22.058731 | orchestrator | ok: [testbed-node-0] 2026-04-06 07:05:22.058742 | orchestrator | 2026-04-06 07:05:22.058753 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-06 07:05:22.058764 | orchestrator | Monday 06 April 2026 07:05:09 +0000 (0:00:01.164) 0:00:35.863 ********** 2026-04-06 07:05:22.058775 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-06 07:05:22.058786 | orchestrator | 2026-04-06 07:05:22.058796 | orchestrator | TASK 
[Set validation result to failed if a test failed] ************************ 2026-04-06 07:05:22.058807 | orchestrator | Monday 06 April 2026 07:05:11 +0000 (0:00:01.502) 0:00:37.366 ********** 2026-04-06 07:05:22.058817 | orchestrator | skipping: [testbed-node-0] 2026-04-06 07:05:22.058828 | orchestrator | 2026-04-06 07:05:22.058839 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-06 07:05:22.058850 | orchestrator | Monday 06 April 2026 07:05:12 +0000 (0:00:01.403) 0:00:38.769 ********** 2026-04-06 07:05:22.058861 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-06 07:05:22.058871 | orchestrator | 2026-04-06 07:05:22.058882 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-06 07:05:22.058893 | orchestrator | Monday 06 April 2026 07:05:14 +0000 (0:00:02.196) 0:00:40.966 ********** 2026-04-06 07:05:22.058904 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-06 07:05:22.058914 | orchestrator | 2026-04-06 07:05:22.058925 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-06 07:05:22.058936 | orchestrator | Monday 06 April 2026 07:05:16 +0000 (0:00:01.311) 0:00:42.278 ********** 2026-04-06 07:05:22.058946 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-06 07:05:22.058998 | orchestrator | 2026-04-06 07:05:22.059011 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 07:05:22.059022 | orchestrator | Monday 06 April 2026 07:05:17 +0000 (0:00:01.332) 0:00:43.610 ********** 2026-04-06 07:05:22.059033 | orchestrator | 2026-04-06 07:05:22.059043 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 07:05:22.059054 | orchestrator | Monday 06 April 2026 07:05:17 +0000 (0:00:00.447) 0:00:44.058 ********** 
2026-04-06 07:05:22.059064 | orchestrator | 2026-04-06 07:05:22.059075 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 07:05:22.059086 | orchestrator | Monday 06 April 2026 07:05:18 +0000 (0:00:00.445) 0:00:44.503 ********** 2026-04-06 07:05:22.059096 | orchestrator | 2026-04-06 07:05:22.059123 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-06 07:05:22.059135 | orchestrator | Monday 06 April 2026 07:05:19 +0000 (0:00:00.817) 0:00:45.321 ********** 2026-04-06 07:05:22.059146 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-06 07:05:22.059156 | orchestrator | 2026-04-06 07:05:22.059167 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-06 07:05:22.059178 | orchestrator | Monday 06 April 2026 07:05:21 +0000 (0:00:02.355) 0:00:47.677 ********** 2026-04-06 07:05:22.059188 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-06 07:05:22.059199 | orchestrator |  "msg": [ 2026-04-06 07:05:22.059210 | orchestrator |  "Validator run completed.", 2026-04-06 07:05:22.059221 | orchestrator |  "You can find the report file here:", 2026-04-06 07:05:22.059232 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-04-06T07:04:36+00:00-report.json", 2026-04-06 07:05:22.059244 | orchestrator |  "on the following host:", 2026-04-06 07:05:22.059255 | orchestrator |  "testbed-manager" 2026-04-06 07:05:22.059273 | orchestrator |  ] 2026-04-06 07:05:22.059285 | orchestrator | } 2026-04-06 07:05:22.059296 | orchestrator | 2026-04-06 07:05:22.059306 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-06 07:05:22.059318 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-06 07:05:22.059331 | orchestrator | testbed-node-1 : ok=5  
changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-06 07:05:22.059359 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-06 07:05:24.065811 | orchestrator | 2026-04-06 07:05:24.065913 | orchestrator | 2026-04-06 07:05:24.065930 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-06 07:05:24.065945 | orchestrator | Monday 06 April 2026 07:05:23 +0000 (0:00:02.102) 0:00:49.779 ********** 2026-04-06 07:05:24.066088 | orchestrator | =============================================================================== 2026-04-06 07:05:24.066103 | orchestrator | Gather list of mgr modules ---------------------------------------------- 3.01s 2026-04-06 07:05:24.066114 | orchestrator | Get timestamp for report file ------------------------------------------- 2.72s 2026-04-06 07:05:24.066125 | orchestrator | Get container info ------------------------------------------------------ 2.62s 2026-04-06 07:05:24.066136 | orchestrator | Write report file ------------------------------------------------------- 2.36s 2026-04-06 07:05:24.066147 | orchestrator | Aggregate test results step one ----------------------------------------- 2.20s 2026-04-06 07:05:24.066158 | orchestrator | Print report file information ------------------------------------------- 2.10s 2026-04-06 07:05:24.066169 | orchestrator | Flush handlers ---------------------------------------------------------- 1.87s 2026-04-06 07:05:24.066180 | orchestrator | Prepare test data for container existance test -------------------------- 1.78s 2026-04-06 07:05:24.066194 | orchestrator | Flush handlers ---------------------------------------------------------- 1.71s 2026-04-06 07:05:24.066211 | orchestrator | Create report output directory ------------------------------------------ 1.71s 2026-04-06 07:05:24.066222 | orchestrator | Set validation result to passed if no test failed 
----------------------- 1.50s 2026-04-06 07:05:24.066233 | orchestrator | Set validation result to failed if a test failed ------------------------ 1.40s 2026-04-06 07:05:24.066244 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 1.39s 2026-04-06 07:05:24.066255 | orchestrator | Set test result to failed if container is missing ----------------------- 1.35s 2026-04-06 07:05:24.066265 | orchestrator | Aggregate test results step three --------------------------------------- 1.34s 2026-04-06 07:05:24.066276 | orchestrator | Set test result to passed if container is existing ---------------------- 1.34s 2026-04-06 07:05:24.066287 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 1.33s 2026-04-06 07:05:24.066297 | orchestrator | Aggregate test results step three --------------------------------------- 1.33s 2026-04-06 07:05:24.066308 | orchestrator | Prepare test data ------------------------------------------------------- 1.31s 2026-04-06 07:05:24.066319 | orchestrator | Aggregate test results step two ----------------------------------------- 1.31s 2026-04-06 07:05:24.261210 | orchestrator | + osism validate ceph-osds 2026-04-06 07:05:57.231077 | orchestrator | 2026-04-06 07:05:57.231186 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-04-06 07:05:57.231203 | orchestrator | 2026-04-06 07:05:57.231216 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-06 07:05:57.231227 | orchestrator | Monday 06 April 2026 07:05:41 +0000 (0:00:01.985) 0:00:01.985 ********** 2026-04-06 07:05:57.231239 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-06 07:05:57.231251 | orchestrator | 2026-04-06 07:05:57.231262 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-06 07:05:57.231273 | orchestrator | Monday 06 April 
2026 07:05:44 +0000 (0:00:02.801) 0:00:04.786 ********** 2026-04-06 07:05:57.231306 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-06 07:05:57.231318 | orchestrator | 2026-04-06 07:05:57.231332 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-06 07:05:57.231350 | orchestrator | Monday 06 April 2026 07:05:45 +0000 (0:00:01.254) 0:00:06.041 ********** 2026-04-06 07:05:57.231369 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-06 07:05:57.231388 | orchestrator | 2026-04-06 07:05:57.231406 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-06 07:05:57.231424 | orchestrator | Monday 06 April 2026 07:05:47 +0000 (0:00:01.751) 0:00:07.793 ********** 2026-04-06 07:05:57.231441 | orchestrator | ok: [testbed-node-3] 2026-04-06 07:05:57.231462 | orchestrator | 2026-04-06 07:05:57.231478 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-04-06 07:05:57.231495 | orchestrator | Monday 06 April 2026 07:05:48 +0000 (0:00:01.137) 0:00:08.930 ********** 2026-04-06 07:05:57.231514 | orchestrator | skipping: [testbed-node-3] 2026-04-06 07:05:57.231533 | orchestrator | 2026-04-06 07:05:57.231552 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-06 07:05:57.231572 | orchestrator | Monday 06 April 2026 07:05:49 +0000 (0:00:01.143) 0:00:10.074 ********** 2026-04-06 07:05:57.231590 | orchestrator | skipping: [testbed-node-3] 2026-04-06 07:05:57.231611 | orchestrator | skipping: [testbed-node-4] 2026-04-06 07:05:57.231629 | orchestrator | skipping: [testbed-node-5] 2026-04-06 07:05:57.231647 | orchestrator | 2026-04-06 07:05:57.231666 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-04-06 07:05:57.231685 | orchestrator | Monday 06 April 2026 07:05:51 +0000 
(0:00:01.917) 0:00:11.992 ********** 2026-04-06 07:05:57.231705 | orchestrator | ok: [testbed-node-3] 2026-04-06 07:05:57.231725 | orchestrator | 2026-04-06 07:05:57.231745 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-04-06 07:05:57.231763 | orchestrator | Monday 06 April 2026 07:05:52 +0000 (0:00:01.184) 0:00:13.176 ********** 2026-04-06 07:05:57.231782 | orchestrator | ok: [testbed-node-3] 2026-04-06 07:05:57.231801 | orchestrator | ok: [testbed-node-4] 2026-04-06 07:05:57.231820 | orchestrator | ok: [testbed-node-5] 2026-04-06 07:05:57.231839 | orchestrator | 2026-04-06 07:05:57.231858 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-04-06 07:05:57.231879 | orchestrator | Monday 06 April 2026 07:05:53 +0000 (0:00:01.485) 0:00:14.661 ********** 2026-04-06 07:05:57.231898 | orchestrator | ok: [testbed-node-3] 2026-04-06 07:05:57.231918 | orchestrator | 2026-04-06 07:05:57.231985 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-06 07:05:57.232008 | orchestrator | Monday 06 April 2026 07:05:55 +0000 (0:00:01.381) 0:00:16.043 ********** 2026-04-06 07:05:57.232027 | orchestrator | ok: [testbed-node-3] 2026-04-06 07:05:57.232046 | orchestrator | ok: [testbed-node-4] 2026-04-06 07:05:57.232066 | orchestrator | ok: [testbed-node-5] 2026-04-06 07:05:57.232084 | orchestrator | 2026-04-06 07:05:57.232102 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-04-06 07:05:57.232121 | orchestrator | Monday 06 April 2026 07:05:56 +0000 (0:00:01.474) 0:00:17.517 ********** 2026-04-06 07:05:57.232143 | orchestrator | skipping: [testbed-node-3] => (item={'id': '00696bf0c35a5f2718aa9fca7b975858ab0bbaa1b111b2e38c4b7c6195daf413', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'name': '/prometheus_libvirt_exporter', 
'state': 'running', 'status': 'Up 6 minutes'})  2026-04-06 07:05:57.232167 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cb3111f6f7d2e07ea17251ba75cb1f7d22397d90b0f55dddf09821ae038c5374', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 7 minutes'})  2026-04-06 07:05:57.232187 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0c736e526d9bf657fde6ff03127fd1467b337958af7dfe73c6de7961aae5f225', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 8 minutes'})  2026-04-06 07:05:57.232221 | orchestrator | skipping: [testbed-node-3] => (item={'id': '34186f340c08d399fa0b87ff94be2f7328cd0b7d8fe38758e85b42acf4eb7cde', 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 23 minutes'})  2026-04-06 07:05:57.232240 | orchestrator | skipping: [testbed-node-3] => (item={'id': '965039dba4cf0aee376fdd76e573b2ff1281df968a9c66aa773d3b520e5c601e', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})  2026-04-06 07:05:57.232285 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ceefe8aab6ea5fe4815351fcff1fac1eb5f44c1eca1e6c9f9db354526494d4fd', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 59 minutes (healthy)'})  2026-04-06 07:05:57.232371 | orchestrator | skipping: [testbed-node-3] => (item={'id': '83ef9662a8195e42d4cf508f6866696a90178a730f0099270e8293de02703cf9', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 59 minutes (healthy)'})  2026-04-06 
07:05:57.232396 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ca076262e8a9ca9783a08bce769da3adc08acbe0a5a7ddde8805958f39709435', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-06 07:05:57.232430 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6dd44e229e48c4455c6ffda60ee9eeb984c596934122ab9ef8dbb3ceb2990b25', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})  2026-04-06 07:05:57.232449 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8e0f1f6a35103aa5300cd072e2b5eacb4b1e57042473dc4bd4318d44c80e14ce', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-04-06 07:05:57.232468 | orchestrator | skipping: [testbed-node-3] => (item={'id': '822982ed7c496cb62c86e73475a2e00b6c4070f555d6b4ecaccc2ea338f5a17d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-06 07:05:57.232490 | orchestrator | ok: [testbed-node-3] => (item={'id': '7884ef79d264a06a655668d633433dd9700601c72e88268b7edc385c2ba4fe7b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 2 hours'}) 2026-04-06 07:05:57.232510 | orchestrator | ok: [testbed-node-3] => (item={'id': '5afcf927becaacfa4a217659909e14c49bd81187a19d7778b9512d3b1bc2f8fb', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 2 hours'}) 2026-04-06 07:05:57.232536 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5d713ae3bdcfab7918885e1b3f98bae04bca35bd8a713572747ecc3bbd73bfeb', 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-06 07:05:57.232556 | orchestrator | skipping: [testbed-node-3] => (item={'id': '457a358cb0a129191378cf57a7b8e9d19b732242e356589a2634ad323b542455', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 2 hours (healthy)'})  2026-04-06 07:05:57.232576 | orchestrator | skipping: [testbed-node-3] => (item={'id': '43b76d6f44da35cf9df3ba62e3aa69a81564a493fd13aad197511973bea5f4ee', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 2 hours (healthy)'})  2026-04-06 07:05:57.232607 | orchestrator | skipping: [testbed-node-3] => (item={'id': '529b403b47ced821922adcbfda8e9dac0b7d8c3d520a64dfe20f4dfb15c1e67a', 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-06 07:05:57.232627 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c0e0f80454b6d696aee6eeb0d526d12f8a3d6f2d7865c3b526910399c1fbd02f', 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-06 07:05:57.232646 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5402588cc68ac3906634bfac609915bde3541a222fc4c94733360cb75a820215', 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-06 07:05:57.232665 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e92b5cd9fdbf1d8489853691e843b5a5dcd1d7afadaf60b8b0259abda38e0b39', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'name': '/prometheus_libvirt_exporter', 
'state': 'running', 'status': 'Up 6 minutes'})  2026-04-06 07:05:57.232697 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e4f4e408016bff4815fb0a2ce1b2e83160a875c071e80bde3ba541e155bbe353', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 7 minutes'})  2026-04-06 07:05:57.415582 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c4c869178a68cf79465ec6203992197f3c9cc4bc95f26446d9e3a473519dbe99', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 8 minutes'})  2026-04-06 07:05:57.415679 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5a1cb93c0b1630ee40872ae90bc2e599c31049a0963c692c58694c6368d3517e', 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 23 minutes'})  2026-04-06 07:05:57.415695 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8330a5a9828e7cc06d2d19db84fcd599bd7571784d8f8f354d1fb23fa05e5a97', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 50 minutes (healthy)'})  2026-04-06 07:05:57.415707 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f3985cac854d9a1c9f1b454676f762bb325540ff5f1293cb9d261aed24998fa8', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 58 minutes (healthy)'})  2026-04-06 07:05:57.415719 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8774d67099233b08c8686108e14f1fd6f0242ab493e75ba4ee374d2557ff93c3', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 59 minutes (healthy)'})  2026-04-06 
07:05:57.415730 | orchestrator | skipping: [testbed-node-4] => (item={'id': '524b1c45d45e8a60d186d41a445d892a48e7c308ebaa1a457ee21975a5428db2', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-06 07:05:57.415759 | orchestrator | skipping: [testbed-node-4] => (item={'id': '270df4445fa2bf124dacd634d6bbd85a2e5d5ecf945fe9d86e3044ef57103562', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-04-06 07:05:57.415771 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3ad3c5c5fa7452609e624499cc8e4035f5361cb8f5e4e8d0ee66c47bde985aaa', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-04-06 07:05:57.415802 | orchestrator | skipping: [testbed-node-4] => (item={'id': '152c16c94a85f0e3d49fdc5e69a20ae74b53dc3c7bbbd9504c1536e5aabc61f5', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-04-06 07:05:57.415813 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6541b7c15c2b2785e102651cc24dfa2d9fc1c58eb3b178ecbd338f2373d695b5', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 6 minutes'})  2026-04-06 07:05:57.415824 | orchestrator | skipping: [testbed-node-5] => (item={'id': '45685597ac4119d6b760e24bc9f15c6b3499518a8f823f861851d08adf94b5d3', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 7 minutes'})  2026-04-06 07:05:57.415835 | orchestrator | skipping: [testbed-node-5] => 
(item={'id': 'b3a6edaa227ec09496a0e8887cb3e7f2adbab844c904da22d6a5b3fc543206f5', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 8 minutes'})  2026-04-06 07:05:57.415846 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c1d1aa080749cc0a2952443b55beb0b2891873e501a550fe801d4340d32a47a3', 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 23 minutes'})  2026-04-06 07:05:57.415857 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a2794a1570a8bbb323a7788b5fc05efc1b37dc807916e5a65c05ce44ff186f07', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 50 minutes (healthy)'})  2026-04-06 07:05:57.415886 | orchestrator | skipping: [testbed-node-5] => (item={'id': '562303c7808016e5712acb1dd3249e0aefe5720340807f2ceca40fb6be617145', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 58 minutes (healthy)'})  2026-04-06 07:05:57.415898 | orchestrator | skipping: [testbed-node-5] => (item={'id': '614e7c879aa0618fb6dbd8c883f5a0dd12da05e14d2706f54a931a65c3913184', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 59 minutes (healthy)'})  2026-04-06 07:05:57.415908 | orchestrator | skipping: [testbed-node-5] => (item={'id': '701502105a3d2655455fb1789592fe586e361e4d12ebb7224a195c4ebee10d6a', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-06 07:05:57.415920 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'10a4641025a838246aa03203444d9fa62b65ff6ecca89872ed943a4ae7ed4986', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-04-06 07:05:57.415931 | orchestrator | skipping: [testbed-node-5] => (item={'id': '546178f6e424f594156d602d925581856c8444f3aa2c7fa9ee831e08b5cef194', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-04-06 07:05:57.415942 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8ee5a462b5a89b2deea793719ce85d62029149863fb52e67ad3b1ca1c93d977f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-04-06 07:05:57.416013 | orchestrator | ok: [testbed-node-5] => (item={'id': '97c12f7b989e58f6f2f70c7db8dee4ec6168b9d729497f87dda80adaca0d6d35', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 2 hours'}) 2026-04-06 07:05:57.416037 | orchestrator | ok: [testbed-node-5] => (item={'id': '53cdb0fd8e2cee250c9205c305501af8766af5800f7fa5c13cdea648de823f3c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 2 hours'}) 2026-04-06 07:05:57.416048 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8533bb30f52b21c030617850d0f1ccbee28954d7d5a934eddb9ebfade27a360f', 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-06 07:05:57.416059 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f8fa6e169973478b04a980f85af0da483d0ae01054e687ea045fcdd85d485621', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 
2 hours (healthy)'})  2026-04-06 07:05:57.416082 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a8fa979c4224ad770d9180d57f5551928ca9d43849772cfdebd32c939f90eb28', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 2 hours (healthy)'})  2026-04-06 07:05:57.416094 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8695149ee4e5f5fb676d14504347992c78267982438a8f476fc4fb42d4e604ac', 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-06 07:05:57.416114 | orchestrator | skipping: [testbed-node-5] => (item={'id': '55056abffe31d9e528ab30ad187f2d35e11f4ca56a51249d1982d70bb68940ec', 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-06 07:05:57.416126 | orchestrator | ok: [testbed-node-4] => (item={'id': '677dc6c81923a1e4eb12ed0ccf4973ac187c09c02688c9dabe5e42092d2713ea', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 2 hours'}) 2026-04-06 07:05:57.416137 | orchestrator | skipping: [testbed-node-5] => (item={'id': '68713b8580250eb1888ff137976b1ab63551181a49df30e71c312adb18cf8d69', 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-06 07:05:57.416156 | orchestrator | ok: [testbed-node-4] => (item={'id': '62155f1f8ac962bc9f3f14824b834086aaf94c6537374e5a08b57d555daf289e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 2 hours'}) 2026-04-06 07:06:34.412750 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a5b2f3211670075df93cc51f4ed47a0976d1e3170dbdd8053928af64576ed931', 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-06 07:06:34.412867 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c760566a657c636490887cd10cce77cff9a17fda1925642e1ec7bb4ebf88f394', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 2 hours (healthy)'})  2026-04-06 07:06:34.412884 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c44ad5e5790fb868dd3d0a73eb6ea6e9a42525cf0302b3267bce1db479db70cc', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 2 hours (healthy)'})  2026-04-06 07:06:34.412898 | orchestrator | skipping: [testbed-node-4] => (item={'id': '44fa6d33629d469172716caa7066427c59b579e4fa7b1129a491ed6e9bde6467', 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-06 07:06:34.412911 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c024662f86a7c0a7084643b2adf5db6397116b582c3bad0ac1b2bd2bf460097b', 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-06 07:06:34.412944 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0dc1f56c786fddae4810263937fcd9928cd65c721765a88b34c89104dc4ba4fb', 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-06 07:06:34.413005 | orchestrator | 2026-04-06 07:06:34.413032 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-04-06 07:06:34.413045 | orchestrator | Monday 06 April 2026 07:05:58 +0000 (0:00:01.796) 0:00:19.313 ********** 
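The container-count task announced above can be sketched as follows — a minimal stand-in, assuming the check simply counts containers whose name begins with `ceph-osd` (the real playbook gathers the container list through Ansible's Docker facts; the canned listing here is hypothetical so the snippet is self-contained):

```shell
# Hypothetical stand-in for one node's container listing; the real data is the
# Docker facts loop visible in the log above.
containers='/ceph-osd-1 running
/ceph-osd-3 running
/nova_compute running
/fluentd running'
# Count only the ceph-osd containers, as the
# "Get count of ceph-osd containers on host" task does.
count=$(printf '%s\n' "$containers" | grep -c '^/ceph-osd-')
echo "ceph-osd containers: $count"   # prints "ceph-osd containers: 2"
```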
2026-04-06 07:06:34.413056 | orchestrator | ok: [testbed-node-3] 2026-04-06 07:06:34.413069 | orchestrator | ok: [testbed-node-4] 2026-04-06 07:06:34.413079 | orchestrator | ok: [testbed-node-5] 2026-04-06 07:06:34.413090 | orchestrator | 2026-04-06 07:06:34.413101 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-04-06 07:06:34.413112 | orchestrator | Monday 06 April 2026 07:05:59 +0000 (0:00:01.398) 0:00:20.711 ********** 2026-04-06 07:06:34.413123 | orchestrator | skipping: [testbed-node-3] 2026-04-06 07:06:34.413134 | orchestrator | skipping: [testbed-node-4] 2026-04-06 07:06:34.413145 | orchestrator | skipping: [testbed-node-5] 2026-04-06 07:06:34.413155 | orchestrator | 2026-04-06 07:06:34.413167 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-04-06 07:06:34.413178 | orchestrator | Monday 06 April 2026 07:06:01 +0000 (0:00:01.411) 0:00:22.123 ********** 2026-04-06 07:06:34.413189 | orchestrator | ok: [testbed-node-3] 2026-04-06 07:06:34.413199 | orchestrator | ok: [testbed-node-4] 2026-04-06 07:06:34.413210 | orchestrator | ok: [testbed-node-5] 2026-04-06 07:06:34.413220 | orchestrator | 2026-04-06 07:06:34.413231 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-06 07:06:34.413242 | orchestrator | Monday 06 April 2026 07:06:02 +0000 (0:00:01.357) 0:00:23.481 ********** 2026-04-06 07:06:34.413252 | orchestrator | ok: [testbed-node-3] 2026-04-06 07:06:34.413263 | orchestrator | ok: [testbed-node-4] 2026-04-06 07:06:34.413274 | orchestrator | ok: [testbed-node-5] 2026-04-06 07:06:34.413288 | orchestrator | 2026-04-06 07:06:34.413300 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-04-06 07:06:34.413313 | orchestrator | Monday 06 April 2026 07:06:04 +0000 (0:00:01.398) 0:00:24.879 ********** 2026-04-06 07:06:34.413326 | orchestrator | skipping: 
[testbed-node-3] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-04-06 07:06:34.413340 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-04-06 07:06:34.413353 | orchestrator | skipping: [testbed-node-3] 2026-04-06 07:06:34.413365 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-04-06 07:06:34.413378 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-04-06 07:06:34.413392 | orchestrator | skipping: [testbed-node-4] 2026-04-06 07:06:34.413405 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-04-06 07:06:34.413418 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-04-06 07:06:34.413429 | orchestrator | skipping: [testbed-node-5] 2026-04-06 07:06:34.413440 | orchestrator | 2026-04-06 07:06:34.413450 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-04-06 07:06:34.413461 | orchestrator | Monday 06 April 2026 07:06:05 +0000 (0:00:01.353) 0:00:26.232 ********** 2026-04-06 07:06:34.413472 | orchestrator | ok: [testbed-node-3] 2026-04-06 07:06:34.413482 | orchestrator | ok: [testbed-node-4] 2026-04-06 07:06:34.413493 | orchestrator | ok: [testbed-node-5] 2026-04-06 07:06:34.413504 | orchestrator | 2026-04-06 07:06:34.413514 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-06 07:06:34.413525 | orchestrator | Monday 06 April 2026 07:06:06 +0000 (0:00:01.348) 0:00:27.581 ********** 2026-04-06 07:06:34.413562 | orchestrator | skipping: [testbed-node-3] 2026-04-06 07:06:34.413574 | orchestrator | skipping: [testbed-node-4] 2026-04-06 07:06:34.413585 | orchestrator | skipping: [testbed-node-5] 2026-04-06 
07:06:34.413596 | orchestrator | 2026-04-06 07:06:34.413607 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-06 07:06:34.413617 | orchestrator | Monday 06 April 2026 07:06:08 +0000 (0:00:01.520) 0:00:29.101 ********** 2026-04-06 07:06:34.413628 | orchestrator | skipping: [testbed-node-3] 2026-04-06 07:06:34.413639 | orchestrator | skipping: [testbed-node-4] 2026-04-06 07:06:34.413649 | orchestrator | skipping: [testbed-node-5] 2026-04-06 07:06:34.413660 | orchestrator | 2026-04-06 07:06:34.413671 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-04-06 07:06:34.413681 | orchestrator | Monday 06 April 2026 07:06:09 +0000 (0:00:01.353) 0:00:30.454 ********** 2026-04-06 07:06:34.413692 | orchestrator | ok: [testbed-node-3] 2026-04-06 07:06:34.413703 | orchestrator | ok: [testbed-node-4] 2026-04-06 07:06:34.413714 | orchestrator | ok: [testbed-node-5] 2026-04-06 07:06:34.413724 | orchestrator | 2026-04-06 07:06:34.413735 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-06 07:06:34.413745 | orchestrator | Monday 06 April 2026 07:06:11 +0000 (0:00:01.365) 0:00:31.819 ********** 2026-04-06 07:06:34.413756 | orchestrator | skipping: [testbed-node-3] 2026-04-06 07:06:34.413767 | orchestrator | 2026-04-06 07:06:34.413777 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-06 07:06:34.413788 | orchestrator | Monday 06 April 2026 07:06:12 +0000 (0:00:01.276) 0:00:33.096 ********** 2026-04-06 07:06:34.413799 | orchestrator | skipping: [testbed-node-3] 2026-04-06 07:06:34.413810 | orchestrator | 2026-04-06 07:06:34.413820 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-06 07:06:34.413831 | orchestrator | Monday 06 April 2026 07:06:13 +0000 (0:00:01.353) 0:00:34.450 ********** 2026-04-06 07:06:34.413842 | 
orchestrator | skipping: [testbed-node-3] 2026-04-06 07:06:34.413852 | orchestrator | 2026-04-06 07:06:34.413863 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 07:06:34.413874 | orchestrator | Monday 06 April 2026 07:06:15 +0000 (0:00:01.552) 0:00:36.002 ********** 2026-04-06 07:06:34.413884 | orchestrator | 2026-04-06 07:06:34.413895 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 07:06:34.413905 | orchestrator | Monday 06 April 2026 07:06:15 +0000 (0:00:00.600) 0:00:36.603 ********** 2026-04-06 07:06:34.413915 | orchestrator | 2026-04-06 07:06:34.413926 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 07:06:34.413942 | orchestrator | Monday 06 April 2026 07:06:16 +0000 (0:00:00.419) 0:00:37.023 ********** 2026-04-06 07:06:34.413969 | orchestrator | 2026-04-06 07:06:34.413980 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-06 07:06:34.413991 | orchestrator | Monday 06 April 2026 07:06:17 +0000 (0:00:00.813) 0:00:37.836 ********** 2026-04-06 07:06:34.414002 | orchestrator | skipping: [testbed-node-3] 2026-04-06 07:06:34.414012 | orchestrator | 2026-04-06 07:06:34.414086 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-04-06 07:06:34.414098 | orchestrator | Monday 06 April 2026 07:06:18 +0000 (0:00:01.280) 0:00:39.116 ********** 2026-04-06 07:06:34.414109 | orchestrator | skipping: [testbed-node-3] 2026-04-06 07:06:34.414120 | orchestrator | 2026-04-06 07:06:34.414130 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-06 07:06:34.414141 | orchestrator | Monday 06 April 2026 07:06:19 +0000 (0:00:01.252) 0:00:40.369 ********** 2026-04-06 07:06:34.414152 | orchestrator | ok: [testbed-node-3] 2026-04-06 07:06:34.414163 | 
orchestrator | ok: [testbed-node-4] 2026-04-06 07:06:34.414173 | orchestrator | ok: [testbed-node-5] 2026-04-06 07:06:34.414184 | orchestrator | 2026-04-06 07:06:34.414195 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-04-06 07:06:34.414206 | orchestrator | Monday 06 April 2026 07:06:21 +0000 (0:00:01.434) 0:00:41.804 ********** 2026-04-06 07:06:34.414224 | orchestrator | ok: [testbed-node-3] 2026-04-06 07:06:34.414235 | orchestrator | 2026-04-06 07:06:34.414246 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-04-06 07:06:34.414256 | orchestrator | Monday 06 April 2026 07:06:22 +0000 (0:00:01.227) 0:00:43.032 ********** 2026-04-06 07:06:34.414267 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-06 07:06:34.414278 | orchestrator | 2026-04-06 07:06:34.414288 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-04-06 07:06:34.414299 | orchestrator | Monday 06 April 2026 07:06:25 +0000 (0:00:03.655) 0:00:46.687 ********** 2026-04-06 07:06:34.414310 | orchestrator | ok: [testbed-node-3] 2026-04-06 07:06:34.414321 | orchestrator | 2026-04-06 07:06:34.414331 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-04-06 07:06:34.414342 | orchestrator | Monday 06 April 2026 07:06:27 +0000 (0:00:01.147) 0:00:47.835 ********** 2026-04-06 07:06:34.414353 | orchestrator | ok: [testbed-node-3] 2026-04-06 07:06:34.414364 | orchestrator | 2026-04-06 07:06:34.414374 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-04-06 07:06:34.414385 | orchestrator | Monday 06 April 2026 07:06:28 +0000 (0:00:01.301) 0:00:49.136 ********** 2026-04-06 07:06:34.414395 | orchestrator | skipping: [testbed-node-3] 2026-04-06 07:06:34.414406 | orchestrator | 2026-04-06 07:06:34.414417 | orchestrator | TASK [Pass 
test if OSDs are all up and in] ************************************* 2026-04-06 07:06:34.414427 | orchestrator | Monday 06 April 2026 07:06:29 +0000 (0:00:01.133) 0:00:50.269 ********** 2026-04-06 07:06:34.414438 | orchestrator | ok: [testbed-node-3] 2026-04-06 07:06:34.414449 | orchestrator | 2026-04-06 07:06:34.414460 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-06 07:06:34.414470 | orchestrator | Monday 06 April 2026 07:06:30 +0000 (0:00:01.104) 0:00:51.374 ********** 2026-04-06 07:06:34.414481 | orchestrator | ok: [testbed-node-3] 2026-04-06 07:06:34.414492 | orchestrator | ok: [testbed-node-4] 2026-04-06 07:06:34.414503 | orchestrator | ok: [testbed-node-5] 2026-04-06 07:06:34.414513 | orchestrator | 2026-04-06 07:06:34.414524 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-04-06 07:06:34.414535 | orchestrator | Monday 06 April 2026 07:06:31 +0000 (0:00:01.338) 0:00:52.713 ********** 2026-04-06 07:06:34.414546 | orchestrator | changed: [testbed-node-3] 2026-04-06 07:06:34.414557 | orchestrator | changed: [testbed-node-4] 2026-04-06 07:06:34.414575 | orchestrator | changed: [testbed-node-5] 2026-04-06 07:07:06.421652 | orchestrator | 2026-04-06 07:07:06.421759 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-04-06 07:07:06.421775 | orchestrator | Monday 06 April 2026 07:06:35 +0000 (0:00:03.589) 0:00:56.303 ********** 2026-04-06 07:07:06.421787 | orchestrator | ok: [testbed-node-3] 2026-04-06 07:07:06.421800 | orchestrator | ok: [testbed-node-4] 2026-04-06 07:07:06.421839 | orchestrator | ok: [testbed-node-5] 2026-04-06 07:07:06.421852 | orchestrator | 2026-04-06 07:07:06.421864 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-04-06 07:07:06.421875 | orchestrator | Monday 06 April 2026 07:06:37 +0000 (0:00:01.500) 0:00:57.804 ********** 
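The encrypted-OSD tally that follows the LVM JSON parse can be sketched like this — a rough approximation, assuming `ceph-volume lvm list --format json` tags each OSD with a `ceph.encrypted` key (the two-OSD JSON excerpt and the tag name are assumptions about the schema, not taken from this log):

```shell
# Hypothetical two-OSD excerpt of `ceph-volume lvm list --format json`;
# the "ceph.encrypted" tag key is an assumption about ceph-volume's schema.
lvm_json='{"0":[{"tags":{"ceph.encrypted":"1"}}],"2":[{"tags":{"ceph.encrypted":"1"}}]}'
# Count OSDs flagged encrypted versus all OSDs carrying the tag.
encrypted=$(printf '%s' "$lvm_json" | grep -o '"ceph.encrypted":"1"' | wc -l | tr -d ' ')
total=$(printf '%s' "$lvm_json" | grep -o '"ceph.encrypted"' | wc -l | tr -d ' ')
# The validator fails when the counts differ and passes when every OSD is encrypted,
# mirroring the pass/fail task pair in the log.
if [ "$encrypted" -eq "$total" ]; then echo "passed"; else echo "failed"; fi
```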
2026-04-06 07:07:06.421886 | orchestrator | ok: [testbed-node-3] 2026-04-06 07:07:06.421897 | orchestrator | ok: [testbed-node-4] 2026-04-06 07:07:06.421908 | orchestrator | ok: [testbed-node-5] 2026-04-06 07:07:06.421919 | orchestrator | 2026-04-06 07:07:06.421930 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-04-06 07:07:06.421941 | orchestrator | Monday 06 April 2026 07:06:38 +0000 (0:00:01.507) 0:00:59.311 ********** 2026-04-06 07:07:06.421998 | orchestrator | skipping: [testbed-node-3] 2026-04-06 07:07:06.422011 | orchestrator | skipping: [testbed-node-4] 2026-04-06 07:07:06.422085 | orchestrator | skipping: [testbed-node-5] 2026-04-06 07:07:06.422098 | orchestrator | 2026-04-06 07:07:06.422119 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-04-06 07:07:06.422130 | orchestrator | Monday 06 April 2026 07:06:39 +0000 (0:00:01.328) 0:01:00.640 ********** 2026-04-06 07:07:06.422164 | orchestrator | ok: [testbed-node-3] 2026-04-06 07:07:06.422177 | orchestrator | ok: [testbed-node-4] 2026-04-06 07:07:06.422187 | orchestrator | ok: [testbed-node-5] 2026-04-06 07:07:06.422198 | orchestrator | 2026-04-06 07:07:06.422209 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-04-06 07:07:06.422220 | orchestrator | Monday 06 April 2026 07:06:41 +0000 (0:00:02.084) 0:01:02.724 ********** 2026-04-06 07:07:06.422231 | orchestrator | skipping: [testbed-node-3] 2026-04-06 07:07:06.422242 | orchestrator | skipping: [testbed-node-4] 2026-04-06 07:07:06.422253 | orchestrator | skipping: [testbed-node-5] 2026-04-06 07:07:06.422263 | orchestrator | 2026-04-06 07:07:06.422274 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-04-06 07:07:06.422285 | orchestrator | Monday 06 April 2026 07:06:43 +0000 (0:00:01.525) 0:01:04.250 ********** 2026-04-06 07:07:06.422296 | 
orchestrator | skipping: [testbed-node-3] 2026-04-06 07:07:06.422307 | orchestrator | skipping: [testbed-node-4] 2026-04-06 07:07:06.422317 | orchestrator | skipping: [testbed-node-5] 2026-04-06 07:07:06.422328 | orchestrator | 2026-04-06 07:07:06.422352 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-06 07:07:06.422363 | orchestrator | Monday 06 April 2026 07:06:44 +0000 (0:00:01.323) 0:01:05.574 ********** 2026-04-06 07:07:06.422374 | orchestrator | ok: [testbed-node-3] 2026-04-06 07:07:06.422385 | orchestrator | ok: [testbed-node-4] 2026-04-06 07:07:06.422396 | orchestrator | ok: [testbed-node-5] 2026-04-06 07:07:06.422406 | orchestrator | 2026-04-06 07:07:06.422417 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-04-06 07:07:06.422428 | orchestrator | Monday 06 April 2026 07:06:46 +0000 (0:00:01.555) 0:01:07.129 ********** 2026-04-06 07:07:06.422439 | orchestrator | ok: [testbed-node-3] 2026-04-06 07:07:06.422449 | orchestrator | ok: [testbed-node-4] 2026-04-06 07:07:06.422460 | orchestrator | ok: [testbed-node-5] 2026-04-06 07:07:06.422471 | orchestrator | 2026-04-06 07:07:06.422482 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-04-06 07:07:06.422493 | orchestrator | Monday 06 April 2026 07:06:48 +0000 (0:00:01.769) 0:01:08.899 ********** 2026-04-06 07:07:06.422504 | orchestrator | ok: [testbed-node-3] 2026-04-06 07:07:06.422515 | orchestrator | ok: [testbed-node-4] 2026-04-06 07:07:06.422525 | orchestrator | ok: [testbed-node-5] 2026-04-06 07:07:06.422536 | orchestrator | 2026-04-06 07:07:06.422547 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-04-06 07:07:06.422558 | orchestrator | Monday 06 April 2026 07:06:49 +0000 (0:00:01.363) 0:01:10.262 ********** 2026-04-06 07:07:06.422568 | orchestrator | skipping: [testbed-node-3] 2026-04-06 
07:07:06.422579 | orchestrator | skipping: [testbed-node-4] 2026-04-06 07:07:06.422590 | orchestrator | skipping: [testbed-node-5] 2026-04-06 07:07:06.422601 | orchestrator | 2026-04-06 07:07:06.422611 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-04-06 07:07:06.422622 | orchestrator | Monday 06 April 2026 07:06:50 +0000 (0:00:01.343) 0:01:11.606 ********** 2026-04-06 07:07:06.422633 | orchestrator | ok: [testbed-node-3] 2026-04-06 07:07:06.422644 | orchestrator | ok: [testbed-node-4] 2026-04-06 07:07:06.422654 | orchestrator | ok: [testbed-node-5] 2026-04-06 07:07:06.422665 | orchestrator | 2026-04-06 07:07:06.422676 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-06 07:07:06.422686 | orchestrator | Monday 06 April 2026 07:06:52 +0000 (0:00:01.331) 0:01:12.937 ********** 2026-04-06 07:07:06.422697 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-06 07:07:06.422708 | orchestrator | 2026-04-06 07:07:06.422719 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-06 07:07:06.422730 | orchestrator | Monday 06 April 2026 07:06:53 +0000 (0:00:01.469) 0:01:14.407 ********** 2026-04-06 07:07:06.422741 | orchestrator | skipping: [testbed-node-3] 2026-04-06 07:07:06.422751 | orchestrator | 2026-04-06 07:07:06.422762 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-06 07:07:06.422781 | orchestrator | Monday 06 April 2026 07:06:54 +0000 (0:00:01.296) 0:01:15.704 ********** 2026-04-06 07:07:06.422791 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-06 07:07:06.422802 | orchestrator | 2026-04-06 07:07:06.422813 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-06 07:07:06.422824 | orchestrator | Monday 06 April 2026 07:06:57 +0000 (0:00:02.822) 
0:01:18.527 ********** 2026-04-06 07:07:06.422835 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-06 07:07:06.422846 | orchestrator | 2026-04-06 07:07:06.422857 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-06 07:07:06.422868 | orchestrator | Monday 06 April 2026 07:06:59 +0000 (0:00:01.291) 0:01:19.818 ********** 2026-04-06 07:07:06.422879 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-06 07:07:06.422890 | orchestrator | 2026-04-06 07:07:06.422920 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 07:07:06.422932 | orchestrator | Monday 06 April 2026 07:07:00 +0000 (0:00:01.268) 0:01:21.087 ********** 2026-04-06 07:07:06.422942 | orchestrator | 2026-04-06 07:07:06.423000 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 07:07:06.423012 | orchestrator | Monday 06 April 2026 07:07:00 +0000 (0:00:00.452) 0:01:21.539 ********** 2026-04-06 07:07:06.423022 | orchestrator | 2026-04-06 07:07:06.423033 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-06 07:07:06.423047 | orchestrator | Monday 06 April 2026 07:07:01 +0000 (0:00:00.456) 0:01:21.996 ********** 2026-04-06 07:07:06.423066 | orchestrator | 2026-04-06 07:07:06.423085 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-06 07:07:06.423103 | orchestrator | Monday 06 April 2026 07:07:02 +0000 (0:00:00.829) 0:01:22.826 ********** 2026-04-06 07:07:06.423121 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-06 07:07:06.423139 | orchestrator | 2026-04-06 07:07:06.423156 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-06 07:07:06.423174 | orchestrator | Monday 06 April 2026 07:07:04 
+0000 (0:00:02.303) 0:01:25.129 **********
2026-04-06 07:07:06.423190 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-04-06 07:07:06.423207 | orchestrator |  "msg": [
2026-04-06 07:07:06.423226 | orchestrator |  "Validator run completed.",
2026-04-06 07:07:06.423244 | orchestrator |  "You can find the report file here:",
2026-04-06 07:07:06.423264 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-04-06T07:05:42+00:00-report.json",
2026-04-06 07:07:06.423286 | orchestrator |  "on the following host:",
2026-04-06 07:07:06.423304 | orchestrator |  "testbed-manager"
2026-04-06 07:07:06.423323 | orchestrator |  ]
2026-04-06 07:07:06.423335 | orchestrator | }
2026-04-06 07:07:06.423347 | orchestrator |
2026-04-06 07:07:06.423357 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 07:07:06.423370 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-06 07:07:06.423382 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-06 07:07:06.423401 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-06 07:07:06.423413 | orchestrator |
2026-04-06 07:07:06.423424 | orchestrator |
2026-04-06 07:07:06.423435 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 07:07:06.423446 | orchestrator | Monday 06 April 2026 07:07:06 +0000 (0:00:01.796) 0:01:26.926 **********
2026-04-06 07:07:06.423456 | orchestrator | ===============================================================================
2026-04-06 07:07:06.423477 | orchestrator | Get ceph osd tree ------------------------------------------------------- 3.66s
2026-04-06 07:07:06.423488 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 3.59s
2026-04-06 07:07:06.423499 | orchestrator | Aggregate test results step one ----------------------------------------- 2.82s
2026-04-06 07:07:06.423510 | orchestrator | Get timestamp for report file ------------------------------------------- 2.80s
2026-04-06 07:07:06.423521 | orchestrator | Write report file ------------------------------------------------------- 2.30s
2026-04-06 07:07:06.423532 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 2.09s
2026-04-06 07:07:06.423543 | orchestrator | Calculate OSD devices for each host ------------------------------------- 1.92s
2026-04-06 07:07:06.423554 | orchestrator | Flush handlers ---------------------------------------------------------- 1.83s
2026-04-06 07:07:06.423565 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 1.80s
2026-04-06 07:07:06.423576 | orchestrator | Print report file information ------------------------------------------- 1.80s
2026-04-06 07:07:06.423587 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 1.77s
2026-04-06 07:07:06.423598 | orchestrator | Create report output directory ------------------------------------------ 1.75s
2026-04-06 07:07:06.423609 | orchestrator | Flush handlers ---------------------------------------------------------- 1.74s
2026-04-06 07:07:06.423620 | orchestrator | Prepare test data ------------------------------------------------------- 1.56s
2026-04-06 07:07:06.423631 | orchestrator | Aggregate test results step three --------------------------------------- 1.55s
2026-04-06 07:07:06.423642 | orchestrator | Fail if count of unencrypted OSDs does not match ------------------------ 1.53s
2026-04-06 07:07:06.423653 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 1.52s
2026-04-06 07:07:06.423664 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 1.51s
2026-04-06 07:07:06.423675 | orchestrator | Parse LVM data as JSON -------------------------------------------------- 1.50s
2026-04-06 07:07:06.423686 | orchestrator | Calculate OSD devices for each host ------------------------------------- 1.48s
2026-04-06 07:07:06.613929 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2026-04-06 07:07:06.621142 | orchestrator | + set -e
2026-04-06 07:07:06.621283 | orchestrator | + source /opt/manager-vars.sh
2026-04-06 07:07:06.621300 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-06 07:07:06.621312 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-06 07:07:06.621323 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-06 07:07:06.621334 | orchestrator | ++ CEPH_VERSION=reef
2026-04-06 07:07:06.621345 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-06 07:07:06.621357 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-06 07:07:06.621368 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-06 07:07:06.621379 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-06 07:07:06.621391 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-06 07:07:06.621402 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-06 07:07:06.621412 | orchestrator | ++ export ARA=false
2026-04-06 07:07:06.621423 | orchestrator | ++ ARA=false
2026-04-06 07:07:06.621434 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-06 07:07:06.621445 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-06 07:07:06.621456 | orchestrator | ++ export TEMPEST=false
2026-04-06 07:07:06.621467 | orchestrator | ++ TEMPEST=false
2026-04-06 07:07:06.621477 | orchestrator | ++ export IS_ZUUL=true
2026-04-06 07:07:06.621488 | orchestrator | ++ IS_ZUUL=true
2026-04-06 07:07:06.621499 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-04-06 07:07:06.621510 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.235
2026-04-06 07:07:06.621521 | orchestrator | ++ export EXTERNAL_API=false
2026-04-06 07:07:06.621532 | orchestrator |
++ EXTERNAL_API=false 2026-04-06 07:07:06.621542 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-06 07:07:06.621553 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-06 07:07:06.621564 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-06 07:07:06.621574 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-06 07:07:06.621585 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-06 07:07:06.621596 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-06 07:07:06.621607 | orchestrator | ++ export RABBITMQ3TO4=true 2026-04-06 07:07:06.621617 | orchestrator | ++ RABBITMQ3TO4=true 2026-04-06 07:07:06.621628 | orchestrator | + source /etc/os-release 2026-04-06 07:07:06.621663 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-04-06 07:07:06.621675 | orchestrator | ++ NAME=Ubuntu 2026-04-06 07:07:06.621685 | orchestrator | ++ VERSION_ID=24.04 2026-04-06 07:07:06.621696 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-04-06 07:07:06.621707 | orchestrator | ++ VERSION_CODENAME=noble 2026-04-06 07:07:06.621717 | orchestrator | ++ ID=ubuntu 2026-04-06 07:07:06.621728 | orchestrator | ++ ID_LIKE=debian 2026-04-06 07:07:06.621739 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-04-06 07:07:06.621749 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-04-06 07:07:06.621760 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-04-06 07:07:06.621771 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-04-06 07:07:06.621783 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-04-06 07:07:06.621794 | orchestrator | ++ LOGO=ubuntu-logo 2026-04-06 07:07:06.621813 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-04-06 07:07:06.621826 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-04-06 07:07:06.621838 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl 
monitoring-plugins-basic mysql-client 2026-04-06 07:07:06.646582 | orchestrator | 2026-04-06 07:07:06.646648 | orchestrator | # Status of Elasticsearch 2026-04-06 07:07:06.646657 | orchestrator | 2026-04-06 07:07:06.646665 | orchestrator | + pushd /opt/configuration/contrib 2026-04-06 07:07:06.646673 | orchestrator | + echo 2026-04-06 07:07:06.646680 | orchestrator | + echo '# Status of Elasticsearch' 2026-04-06 07:07:06.646687 | orchestrator | + echo 2026-04-06 07:07:06.646694 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-04-06 07:07:06.832073 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-04-06 07:07:06.832171 | orchestrator | 2026-04-06 07:07:06.832187 | orchestrator | # Status of MariaDB 2026-04-06 07:07:06.832200 | orchestrator | 2026-04-06 07:07:06.832212 | orchestrator | + echo 2026-04-06 07:07:06.832223 | orchestrator | + echo '# Status of MariaDB' 2026-04-06 07:07:06.832234 | orchestrator | + echo 2026-04-06 07:07:06.832916 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-06 07:07:06.900237 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-06 07:07:06.900304 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-06 07:07:06.900317 | orchestrator | + MARIADB_USER=root_shard_0 2026-04-06 07:07:06.900330 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-04-06 07:07:07.084259 | orchestrator | Reading package lists... 2026-04-06 07:07:07.461806 | orchestrator | Building dependency tree... 2026-04-06 07:07:07.463678 | orchestrator | Reading state information... 
2026-04-06 07:07:07.921567 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-04-06 07:07:07.921664 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2026-04-06 07:07:08.618901 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-04-06 07:07:08.620275 | orchestrator | 2026-04-06 07:07:08.620349 | orchestrator | # Status of Prometheus 2026-04-06 07:07:08.620363 | orchestrator | 2026-04-06 07:07:08.620374 | orchestrator | + echo 2026-04-06 07:07:08.620385 | orchestrator | + echo '# Status of Prometheus' 2026-04-06 07:07:08.620395 | orchestrator | + echo 2026-04-06 07:07:08.620405 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-04-06 07:07:08.684102 | orchestrator | Unauthorized 2026-04-06 07:07:08.688121 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-04-06 07:07:08.746646 | orchestrator | Unauthorized 2026-04-06 07:07:08.750203 | orchestrator | 2026-04-06 07:07:08.750258 | orchestrator | # Status of RabbitMQ 2026-04-06 07:07:08.750271 | orchestrator | 2026-04-06 07:07:08.750283 | orchestrator | + echo 2026-04-06 07:07:08.750295 | orchestrator | + echo '# Status of RabbitMQ' 2026-04-06 07:07:08.750306 | orchestrator | + echo 2026-04-06 07:07:08.750813 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-06 07:07:08.813344 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-06 07:07:08.813463 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-06 07:07:08.813492 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-04-06 07:07:09.355137 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2026-04-06 07:07:09.363638 | orchestrator | 2026-04-06 07:07:09.363752 | orchestrator | # Status of Redis 2026-04-06 07:07:09.363775 | orchestrator | 2026-04-06 07:07:09.363792 | orchestrator | + echo 2026-04-06 07:07:09.363808 
| orchestrator | + echo '# Status of Redis' 2026-04-06 07:07:09.363825 | orchestrator | + echo 2026-04-06 07:07:09.363843 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-04-06 07:07:09.373191 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001471s;;;0.000000;10.000000 2026-04-06 07:07:09.373701 | orchestrator | 2026-04-06 07:07:09.373726 | orchestrator | # Create backup of MariaDB database 2026-04-06 07:07:09.373737 | orchestrator | 2026-04-06 07:07:09.373747 | orchestrator | + popd 2026-04-06 07:07:09.373757 | orchestrator | + echo 2026-04-06 07:07:09.373767 | orchestrator | + echo '# Create backup of MariaDB database' 2026-04-06 07:07:09.373777 | orchestrator | + echo 2026-04-06 07:07:09.373787 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-04-06 07:07:10.725525 | orchestrator | 2026-04-06 07:07:10 | INFO  | Prepare task for execution of mariadb_backup. 2026-04-06 07:07:10.792853 | orchestrator | 2026-04-06 07:07:10 | INFO  | Task 49d39cd5-4181-42f3-8c44-945556c6e5ec (mariadb_backup) was prepared for execution. 2026-04-06 07:07:10.792993 | orchestrator | 2026-04-06 07:07:10 | INFO  | It takes a moment until task 49d39cd5-4181-42f3-8c44-945556c6e5ec (mariadb_backup) has been started and output is visible here. 
2026-04-06 07:07:47.008932 | orchestrator |
2026-04-06 07:07:47.009102 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-06 07:07:47.009119 | orchestrator |
2026-04-06 07:07:47.009131 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-06 07:07:47.009143 | orchestrator | Monday 06 April 2026 07:07:15 +0000 (0:00:01.449) 0:00:01.449 **********
2026-04-06 07:07:47.009155 | orchestrator | ok: [testbed-node-0]
2026-04-06 07:07:47.009167 | orchestrator | ok: [testbed-node-1]
2026-04-06 07:07:47.009178 | orchestrator | ok: [testbed-node-2]
2026-04-06 07:07:47.009189 | orchestrator |
2026-04-06 07:07:47.009200 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-06 07:07:47.009229 | orchestrator | Monday 06 April 2026 07:07:17 +0000 (0:00:01.845) 0:00:03.294 **********
2026-04-06 07:07:47.009241 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-06 07:07:47.009252 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-04-06 07:07:47.009264 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-04-06 07:07:47.009275 | orchestrator |
2026-04-06 07:07:47.009286 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-04-06 07:07:47.009297 | orchestrator |
2026-04-06 07:07:47.009308 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-04-06 07:07:47.009319 | orchestrator | Monday 06 April 2026 07:07:20 +0000 (0:00:03.204) 0:00:06.499 **********
2026-04-06 07:07:47.009330 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-06 07:07:47.009342 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-06 07:07:47.009352 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-06 07:07:47.009363 | orchestrator |
2026-04-06 07:07:47.009374 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-06 07:07:47.009386 | orchestrator | Monday 06 April 2026 07:07:22 +0000 (0:00:01.930) 0:00:08.429 **********
2026-04-06 07:07:47.009397 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-06 07:07:47.009409 | orchestrator |
2026-04-06 07:07:47.009420 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-04-06 07:07:47.009431 | orchestrator | Monday 06 April 2026 07:07:24 +0000 (0:00:01.850) 0:00:10.279 **********
2026-04-06 07:07:47.009442 | orchestrator | ok: [testbed-node-0]
2026-04-06 07:07:47.009453 | orchestrator | ok: [testbed-node-1]
2026-04-06 07:07:47.009466 | orchestrator | ok: [testbed-node-2]
2026-04-06 07:07:47.009498 | orchestrator |
2026-04-06 07:07:47.009513 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-04-06 07:07:47.009527 | orchestrator | Monday 06 April 2026 07:07:29 +0000 (0:00:04.782) 0:00:15.062 **********
2026-04-06 07:07:47.009541 | orchestrator | skipping: [testbed-node-1]
2026-04-06 07:07:47.009556 | orchestrator | skipping: [testbed-node-2]
2026-04-06 07:07:47.009569 | orchestrator | changed: [testbed-node-0]
2026-04-06 07:07:47.009587 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-04-06 07:07:47.009601 | orchestrator |
2026-04-06 07:07:47.009614 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-06 07:07:47.009628 | orchestrator | skipping: no hosts matched
2026-04-06 07:07:47.009641 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-04-06 07:07:47.009654 | orchestrator |
2026-04-06 07:07:47.009667 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-06 07:07:47.009681 | orchestrator | skipping: no hosts matched
2026-04-06 07:07:47.009695 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-06 07:07:47.009708 | orchestrator | mariadb_bootstrap_restart
2026-04-06 07:07:47.009721 | orchestrator |
2026-04-06 07:07:47.009735 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-04-06 07:07:47.009748 | orchestrator | skipping: no hosts matched
2026-04-06 07:07:47.009761 | orchestrator |
2026-04-06 07:07:47.009774 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-04-06 07:07:47.009788 | orchestrator |
2026-04-06 07:07:47.009800 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-04-06 07:07:47.009814 | orchestrator | Monday 06 April 2026 07:07:43 +0000 (0:00:14.379) 0:00:29.442 **********
2026-04-06 07:07:47.009827 | orchestrator | skipping: [testbed-node-0]
2026-04-06 07:07:47.009840 | orchestrator | skipping: [testbed-node-1]
2026-04-06 07:07:47.009853 | orchestrator | skipping: [testbed-node-2]
2026-04-06 07:07:47.009864 | orchestrator |
2026-04-06 07:07:47.009875 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-04-06 07:07:47.009885 | orchestrator | Monday 06 April 2026 07:07:44 +0000 (0:00:01.313) 0:00:30.755 **********
2026-04-06 07:07:47.009896 | orchestrator | skipping: [testbed-node-0]
2026-04-06 07:07:47.009907 | orchestrator | skipping: [testbed-node-1]
2026-04-06 07:07:47.009917 | orchestrator | skipping: [testbed-node-2]
2026-04-06 07:07:47.009928 | orchestrator |
2026-04-06 07:07:47.009939 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 07:07:47.010010 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-06 07:07:47.010087 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-06 07:07:47.010100 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-06 07:07:47.010111 | orchestrator |
2026-04-06 07:07:47.010122 | orchestrator |
2026-04-06 07:07:47.010133 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 07:07:47.010144 | orchestrator | Monday 06 April 2026 07:07:46 +0000 (0:00:01.755) 0:00:32.511 **********
2026-04-06 07:07:47.010155 | orchestrator | ===============================================================================
2026-04-06 07:07:47.010166 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 14.38s
2026-04-06 07:07:47.010197 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 4.78s
2026-04-06 07:07:47.010209 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.20s
2026-04-06 07:07:47.010220 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 1.93s
2026-04-06 07:07:47.010231 | orchestrator | mariadb : include_tasks ------------------------------------------------- 1.85s
2026-04-06 07:07:47.010337 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.85s
2026-04-06 07:07:47.010479 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 1.76s
2026-04-06 07:07:47.010495 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 1.31s
2026-04-06 07:07:47.199673 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-04-06 07:07:47.204586 | orchestrator | + set -e
2026-04-06 07:07:47.204652 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-06 07:07:47.204663 | orchestrator | ++ export INTERACTIVE=false
2026-04-06 07:07:47.204670 | orchestrator | ++ INTERACTIVE=false
2026-04-06 07:07:47.204677 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-06 07:07:47.204684 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-06 07:07:47.204692 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-06 07:07:47.205367 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-06 07:07:47.210111 | orchestrator |
2026-04-06 07:07:47.210155 | orchestrator | # OpenStack endpoints
2026-04-06 07:07:47.210165 | orchestrator |
2026-04-06 07:07:47.210172 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-06 07:07:47.210179 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-06 07:07:47.210186 | orchestrator | + export OS_CLOUD=admin
2026-04-06 07:07:47.210193 | orchestrator | + OS_CLOUD=admin
2026-04-06 07:07:47.210200 | orchestrator | + echo
2026-04-06 07:07:47.210207 | orchestrator | + echo '# OpenStack endpoints'
2026-04-06 07:07:47.210214 | orchestrator | + echo
2026-04-06 07:07:47.210220 | orchestrator | + openstack endpoint list
2026-04-06 07:07:50.285629 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-06 07:07:50.285731 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-04-06 07:07:50.285747 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-06 07:07:50.285759 | orchestrator | | 00c7043dd98b41e4957e8da8ac6a5bd9 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-06 07:07:50.285770 | orchestrator | | 0210447ba9f74cdb84201782bc8dbf0f | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-04-06 07:07:50.285798 | orchestrator | | 0de5076405154708bb4ef31cf2ae5b49 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-04-06 07:07:50.285810 | orchestrator | | 10b83992a88a4f86b57d29142b13d3ac | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 |
2026-04-06 07:07:50.285820 | orchestrator | | 183527a132bf4c488295cd7a6199d8dd | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-04-06 07:07:50.285831 | orchestrator | | 1a58ad23aca54c2ba6a32a6a1def4e95 | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 |
2026-04-06 07:07:50.285842 | orchestrator | | 1b7381892f864f4d9f195bdf65110653 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-04-06 07:07:50.285853 | orchestrator | | 23b8ead015644e2e80b802cfe5c7c497 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 |
2026-04-06 07:07:50.285863 | orchestrator | | 255f85dd8c79400c89e23619d5b60091 | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 |
2026-04-06 07:07:50.285874 | orchestrator | | 26ea840d019e489d81498deb30f9b5fd | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 |
2026-04-06 07:07:50.285915 | orchestrator | | 315f6e63b173498c9087564076b669cb | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-04-06 07:07:50.285927 | orchestrator | | 39e2a961f3f442c893dab6e2dca4c84b | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-04-06 07:07:50.285937 | orchestrator | | 3e4b441f6f844d3f930629e0a6e1c43a | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-04-06 07:07:50.286002 | orchestrator | | 5515f9518124464b8802c7d5ea7d534a | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-06 07:07:50.286108 | orchestrator | | 7118b6da031e482f877364eb1ea560c9 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-04-06 07:07:50.286120 | orchestrator | | 7b358489a0fd41de94297d1a6de28272 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-04-06 07:07:50.286131 | orchestrator | | 7dedf4f3df4140cc893672197944356b | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-04-06 07:07:50.286144 | orchestrator | | 94e6b5c000d04f01b85d59361ca95575 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-06 07:07:50.286158 | orchestrator | | 97d284ddca284777a8a6592ef76120cd | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-04-06 07:07:50.286173 | orchestrator | | 98ed7244d9964eda9c5676ac3e14a1d5 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-06 07:07:50.286208 | orchestrator | | a66acc85e2e94652b0c775779e867d04 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-04-06 07:07:50.286223 | orchestrator | | b74613dd69504680920e53e2fecac473 | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-04-06 07:07:50.286236 | orchestrator | | b9c8f5aa4bca4270950ead29b62b2462 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-04-06 07:07:50.286248 | orchestrator | | d0367dc4e42846f18ade8ec02f82357a | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-04-06 07:07:50.286267 | orchestrator | | d69ad87518b841ae9ab2601c819718cf | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-04-06 07:07:50.286293 | orchestrator | | d88e3e4cd5a14882a0092840325cdaa5 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-04-06 07:07:50.286312 | orchestrator | | ebd3e823599f4a55b73452cfb1a1185f | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 |
2026-04-06 07:07:50.286331 | orchestrator | | ec958dc3331c4d5aa550324ee4dee13c | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-04-06 07:07:50.286350 | orchestrator | | ecbd6d624c6a457ab8e42c5da2ea2e9d | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-04-06 07:07:50.286370 | orchestrator | | f1578e57c413477e9b961874dcb46f94 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-04-06 07:07:50.286401 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-06 07:07:50.538796 | orchestrator |
2026-04-06 07:07:50.538894 | orchestrator | # Cinder
2026-04-06 07:07:50.538911 | orchestrator |
2026-04-06 07:07:50.538924 | orchestrator | + echo
2026-04-06 07:07:50.538935 | orchestrator | + echo '# Cinder'
2026-04-06 07:07:50.539009 | orchestrator | + echo
2026-04-06 07:07:50.539022 | orchestrator | + openstack volume service list
2026-04-06 07:07:53.211431 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-06 07:07:53.211537 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-04-06 07:07:53.211552 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-06 07:07:53.211563 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-06T07:07:51.000000 |
2026-04-06 07:07:53.211575 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-06T07:07:52.000000 |
2026-04-06 07:07:53.211586 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-06T07:07:52.000000 |
2026-04-06 07:07:53.211597 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-06T07:07:46.000000 |
2026-04-06 07:07:53.211608 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-06T07:07:48.000000 |
2026-04-06 07:07:53.211620 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-06T07:07:51.000000 |
2026-04-06 07:07:53.211631 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-06T07:07:52.000000 |
2026-04-06 07:07:53.211642 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-06T07:07:51.000000 |
2026-04-06 07:07:53.211653 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-06T07:07:45.000000 |
2026-04-06 07:07:53.211665 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-06 07:07:53.462470 | orchestrator |
2026-04-06 07:07:53.462585 | orchestrator | # Neutron
2026-04-06 07:07:53.462603 | orchestrator |
2026-04-06 07:07:53.462616 | orchestrator | + echo
2026-04-06 07:07:53.462671 | orchestrator | + echo '# Neutron'
2026-04-06 07:07:53.462689 | orchestrator | + echo
2026-04-06 07:07:53.462704 | orchestrator | + openstack network agent list
2026-04-06 07:07:56.119296 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-06 07:07:56.119421 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-04-06 07:07:56.119433 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-06 07:07:56.119440 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-04-06 07:07:56.119447 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-06 07:07:56.119453 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-04-06 07:07:56.119459 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-04-06 07:07:56.119465 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-04-06 07:07:56.119471 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-06 07:07:56.119498 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-04-06 07:07:56.119515 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-04-06 07:07:56.119522 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-06 07:07:56.119528 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-06 07:07:56.396929 | orchestrator | + openstack network service provider list
2026-04-06 07:07:58.938994 | orchestrator | +---------------+------+---------+
2026-04-06 07:07:58.939105 | orchestrator | | Service Type | Name | Default |
2026-04-06 07:07:58.939122 | orchestrator | +---------------+------+---------+
2026-04-06 07:07:58.939134 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-04-06 07:07:58.939145 | orchestrator | +---------------+------+---------+
2026-04-06 07:07:59.190464 | orchestrator |
2026-04-06 07:07:59.190565 | orchestrator | # Nova
2026-04-06 07:07:59.190580 | orchestrator |
2026-04-06 07:07:59.190591 | orchestrator | + echo
2026-04-06 07:07:59.190603 | orchestrator | + echo '# Nova'
2026-04-06 07:07:59.190614 | orchestrator | + echo
2026-04-06 07:07:59.190626 | orchestrator | + openstack compute service list
2026-04-06 07:08:01.925030 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-06 07:08:01.925126 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-04-06 07:08:01.925140 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-06 07:08:01.925151 | orchestrator | | a657bfe1-3fd1-47e4-bce0-32ec1c211dcf | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-06T07:07:53.000000 |
2026-04-06 07:08:01.925160 | orchestrator | | 232d3831-9f87-4f64-8ff9-0c6d96091b27 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-06T07:07:52.000000 |
2026-04-06 07:08:01.925170 | orchestrator | | ee8f610d-26d6-4eee-8468-60b19edf4d4f | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-06T07:07:52.000000 |
2026-04-06 07:08:01.925180 | orchestrator | | 59cd50b5-e990-4047-a03b-eb3483aba8f2 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-04-06T07:08:00.000000 |
2026-04-06 07:08:01.925189 | orchestrator | | 0b2c0489-0f27-4e2c-90b3-304234806bc1 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-06T07:07:54.000000 |
2026-04-06 07:08:01.925199 | orchestrator | | f2b671a6-23d3-4b65-96ca-30de89a29ddc | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-06T07:07:56.000000 |
2026-04-06 07:08:01.925208 | orchestrator | | d70bd592-fe8c-46ed-9b60-339b84eed0f2 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-06T07:07:56.000000 |
2026-04-06 07:08:01.925218 | orchestrator | | b3c36238-2c97-4ec1-88de-edbfde4400ea | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-06T07:07:55.000000 |
2026-04-06 07:08:01.925227 | orchestrator | | b4c648b1-f12b-41f7-8592-2a14ad52fc46 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-06T07:07:56.000000 |
2026-04-06 07:08:01.925237 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-06 07:08:02.193480 | orchestrator | + openstack hypervisor list
2026-04-06 07:08:04.700100 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-06 07:08:04.700225 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-04-06 07:08:04.700241 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-06 07:08:04.700253 | orchestrator | | 17c0be7d-16bc-4a51-b0e4-4d4f5bc1c7bb | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-04-06 07:08:04.700290 | orchestrator | | 383f23fc-0ca6-42fb-8b66-be3e2ce2de68 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-04-06 07:08:04.700302 | orchestrator | | b8e90d29-893f-4ddc-89c5-12fe53efef7d | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-04-06 07:08:04.700313 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-06 07:08:04.948010 | orchestrator |
2026-04-06 07:08:04.948104 | orchestrator | # Run OpenStack test play
2026-04-06 07:08:04.948119 | orchestrator |
2026-04-06 07:08:04.948130 | orchestrator | + echo
2026-04-06 07:08:04.948141 | orchestrator | + echo '# Run OpenStack test play'
2026-04-06 07:08:04.948152 | orchestrator | + echo
2026-04-06 07:08:04.948162 | orchestrator | + osism apply --environment openstack test
2026-04-06 07:08:06.271819 | orchestrator | 2026-04-06 07:08:06 | INFO  | Trying to run play test in environment openstack
2026-04-06 07:08:16.302222 | orchestrator | 2026-04-06 07:08:16 | INFO  | Prepare task for execution of test.
2026-04-06 07:08:16.397091 | orchestrator | 2026-04-06 07:08:16 | INFO  | Task 9ad8bba2-79b6-4c7e-b08a-e774f3bd13e3 (test) was prepared for execution.
2026-04-06 07:08:16.397194 | orchestrator | 2026-04-06 07:08:16 | INFO  | It takes a moment until task 9ad8bba2-79b6-4c7e-b08a-e774f3bd13e3 (test) has been started and output is visible here.
2026-04-06 07:10:52.772865 | orchestrator |
2026-04-06 07:10:52.773081 | orchestrator | PLAY [Create test project] *****************************************************
2026-04-06 07:10:52.773102 | orchestrator |
2026-04-06 07:10:52.773115 | orchestrator | TASK [Create test domain] ******************************************************
2026-04-06 07:10:52.773126 | orchestrator | Monday 06 April 2026 07:08:21 +0000 (0:00:01.358) 0:00:01.358 **********
2026-04-06 07:10:52.773138 | orchestrator | ok: [localhost]
2026-04-06 07:10:52.773150 | orchestrator |
2026-04-06 07:10:52.773177 | orchestrator | TASK [Create test-admin user] **************************************************
2026-04-06 07:10:52.773189 | orchestrator | Monday 06 April 2026 07:08:27 +0000 (0:00:05.986) 0:00:07.345 **********
2026-04-06 07:10:52.773200 | orchestrator | ok: [localhost]
2026-04-06 07:10:52.773211 | orchestrator |
2026-04-06 07:10:52.773222 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-04-06 07:10:52.773233 | orchestrator | Monday 06 April 2026 07:08:32 +0000 (0:00:04.987) 0:00:12.333 **********
2026-04-06 07:10:52.773244 | orchestrator | changed: [localhost]
2026-04-06 07:10:52.773259 | orchestrator |
2026-04-06 07:10:52.773278 | orchestrator | TASK [Create test project] *****************************************************
2026-04-06 07:10:52.773298 | orchestrator | Monday 06 April 2026 07:08:41 +0000 (0:00:09.384) 0:00:21.717 **********
2026-04-06 07:10:52.773318 | orchestrator | ok: [localhost]
2026-04-06 07:10:52.773337 | orchestrator |
2026-04-06 07:10:52.773353 | orchestrator | TASK [Create test user] ********************************************************
2026-04-06 07:10:52.773371 | orchestrator | Monday 06 April 2026 07:08:46 +0000 (0:00:05.045) 0:00:26.762 **********
2026-04-06 07:10:52.773390 | orchestrator | ok: [localhost]
2026-04-06 07:10:52.773408 | orchestrator |
2026-04-06 07:10:52.773427 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-04-06 07:10:52.773447 | orchestrator | Monday 06 April 2026 07:08:51 +0000 (0:00:05.166) 0:00:31.929 **********
2026-04-06 07:10:52.773467 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-04-06 07:10:52.773487 | orchestrator | ok: [localhost] => (item=member)
2026-04-06 07:10:52.773508 | orchestrator | changed: [localhost] => (item=creator)
2026-04-06 07:10:52.773529 | orchestrator |
2026-04-06 07:10:52.773549 | orchestrator | TASK [Create test server group] ************************************************
2026-04-06 07:10:52.773569 | orchestrator | Monday 06 April 2026 07:09:05 +0000 (0:00:13.349) 0:00:45.279 **********
2026-04-06 07:10:52.773590 | orchestrator | ok: [localhost]
2026-04-06 07:10:52.773609 | orchestrator |
2026-04-06 07:10:52.773630 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-04-06 07:10:52.773679 | orchestrator | Monday 06 April 2026 07:09:11 +0000 (0:00:05.924) 0:00:51.204 **********
2026-04-06 07:10:52.773700 | orchestrator | ok: [localhost]
2026-04-06 07:10:52.773719 | orchestrator |
2026-04-06 07:10:52.773731 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-04-06 07:10:52.773741 | orchestrator | Monday 06 April 2026 07:09:16 +0000 (0:00:05.051) 0:00:56.255 **********
2026-04-06 07:10:52.773752 | orchestrator | ok: [localhost]
2026-04-06 07:10:52.773763 | orchestrator |
2026-04-06 07:10:52.773774 | orchestrator | TASK [Create icmp security group] **********************************************
2026-04-06 07:10:52.773785 | orchestrator | Monday 06 April 2026 07:09:21 +0000 (0:00:05.260) 0:01:01.516 **********
2026-04-06 07:10:52.773796 | orchestrator | ok: [localhost]
2026-04-06 07:10:52.773807 | orchestrator |
2026-04-06 07:10:52.773818 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-04-06 07:10:52.773829 | orchestrator | Monday 06 April 2026 07:09:26 +0000 (0:00:04.819) 0:01:06.335 **********
2026-04-06 07:10:52.773839 | orchestrator | ok: [localhost]
2026-04-06 07:10:52.773850 | orchestrator |
2026-04-06 07:10:52.773861 | orchestrator | TASK [Create test keypair] *****************************************************
2026-04-06 07:10:52.773872 | orchestrator | Monday 06 April 2026 07:09:31 +0000 (0:00:04.994) 0:01:11.329 **********
2026-04-06 07:10:52.773882 | orchestrator | ok: [localhost]
2026-04-06 07:10:52.773893 | orchestrator |
2026-04-06 07:10:52.773904 | orchestrator | TASK [Create test networks] ****************************************************
2026-04-06 07:10:52.773914 | orchestrator | Monday 06 April 2026 07:09:36 +0000 (0:00:04.933) 0:01:16.263 **********
2026-04-06 07:10:52.773925 | orchestrator | ok: [localhost] => (item={'name': 'test-1'})
2026-04-06 07:10:52.773967 | orchestrator | ok: [localhost] => (item={'name': 'test-2'})
2026-04-06 07:10:52.773979 | orchestrator | ok: [localhost] => (item={'name': 'test-3'})
2026-04-06 07:10:52.773990 | orchestrator |
2026-04-06 07:10:52.774001 | orchestrator | TASK [Create test subnets] *****************************************************
2026-04-06 07:10:52.774067 | orchestrator | Monday 06 April 2026 07:09:48 +0000 (0:00:12.373) 0:01:28.636 **********
2026-04-06 07:10:52.774080 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'})
2026-04-06 07:10:52.774093 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'})
2026-04-06 07:10:52.774103 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'})
2026-04-06 07:10:52.774114 | orchestrator |
2026-04-06 07:10:52.774125 | orchestrator | TASK [Create test routers] *****************************************************
2026-04-06 07:10:52.774147 | orchestrator | Monday 06 April 2026 07:10:01 +0000 (0:00:12.868) 0:01:41.505 **********
2026-04-06 07:10:52.774157 | orchestrator | ok: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'})
2026-04-06 07:10:52.774169 | orchestrator | ok: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'})
2026-04-06 07:10:52.774179 | orchestrator | ok: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'})
2026-04-06 07:10:52.774190 | orchestrator |
2026-04-06 07:10:52.774201 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-04-06 07:10:52.774212 | orchestrator |
2026-04-06 07:10:52.774223 | orchestrator | TASK [Get test server group] ***************************************************
2026-04-06 07:10:52.774233 | orchestrator | Monday 06 April 2026 07:10:15 +0000 (0:00:14.221) 0:01:55.727 **********
2026-04-06 07:10:52.774245 | orchestrator | ok: [localhost]
2026-04-06 07:10:52.774256 | orchestrator |
2026-04-06 07:10:52.774287 | orchestrator | TASK [Detach test volume] ******************************************************
2026-04-06 07:10:52.774298 | orchestrator | Monday 06 April 2026 07:10:20 +0000 (0:00:04.931) 0:02:00.659 **********
2026-04-06 07:10:52.774309 | orchestrator | skipping: [localhost]
2026-04-06 07:10:52.774320 | orchestrator |
2026-04-06 07:10:52.774331 | orchestrator | TASK [Delete test volume] ******************************************************
2026-04-06 07:10:52.774342 | orchestrator | Monday 06 April 2026 07:10:21 +0000 (0:00:01.093) 0:02:01.752 **********
2026-04-06 07:10:52.774367 | orchestrator | skipping: [localhost]
2026-04-06 07:10:52.774392 | orchestrator |
2026-04-06 07:10:52.774418 | orchestrator | TASK [Delete test instances] ***************************************************
2026-04-06 07:10:52.774434 | orchestrator | Monday 06 April 2026 07:10:22 +0000 (0:00:01.116) 0:02:02.868 **********
2026-04-06 07:10:52.774452 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-06 07:10:52.774469 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-06 07:10:52.774486 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-06 07:10:52.774503 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-06 07:10:52.774520 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-06 07:10:52.774538 | orchestrator | skipping: [localhost]
2026-04-06 07:10:52.774556 | orchestrator |
2026-04-06 07:10:52.774572 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-04-06 07:10:52.774589 | orchestrator | Monday 06 April 2026 07:10:24 +0000 (0:00:01.302) 0:02:04.171 **********
2026-04-06 07:10:52.774607 | orchestrator | skipping: [localhost]
2026-04-06 07:10:52.774626 | orchestrator |
2026-04-06 07:10:52.774644 | orchestrator | TASK [Create test instances] ***************************************************
2026-04-06 07:10:52.774662 | orchestrator | Monday 06 April 2026 07:10:25 +0000 (0:00:01.215) 0:02:05.387 **********
2026-04-06 07:10:52.774676 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-06 07:10:52.774687 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-06 07:10:52.774697 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-06 07:10:52.774708 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-06 07:10:52.774719 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-06 07:10:52.774730 | orchestrator |
2026-04-06 07:10:52.774740 | orchestrator |
TASK [Wait for instance creation to complete] ********************************** 2026-04-06 07:10:52.774751 | orchestrator | Monday 06 April 2026 07:10:31 +0000 (0:00:05.869) 0:02:11.257 ********** 2026-04-06 07:10:52.774762 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-04-06 07:10:52.774775 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j454789909243.3699', 'results_file': '/ansible/.ansible_async/j454789909243.3699', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-06 07:10:52.774799 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j993916758323.3730', 'results_file': '/ansible/.ansible_async/j993916758323.3730', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-06 07:10:52.774811 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j566579474350.3756', 'results_file': '/ansible/.ansible_async/j566579474350.3756', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-06 07:10:52.774822 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j818572436230.3781', 'results_file': '/ansible/.ansible_async/j818572436230.3781', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-06 07:10:52.774833 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j43351672507.3806', 'results_file': '/ansible/.ansible_async/j43351672507.3806', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-06 07:10:52.774844 | orchestrator | 2026-04-06 07:10:52.774864 | orchestrator | TASK [Add 
metadata to instances] *********************************************** 2026-04-06 07:10:52.774875 | orchestrator | Monday 06 April 2026 07:10:47 +0000 (0:00:16.021) 0:02:27.279 ********** 2026-04-06 07:10:52.774886 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-06 07:10:52.775005 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-06 07:10:52.775020 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-06 07:10:52.775031 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-06 07:10:52.775042 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-06 07:10:52.775053 | orchestrator | 2026-04-06 07:10:52.775064 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-04-06 07:10:52.775086 | orchestrator | Monday 06 April 2026 07:10:52 +0000 (0:00:05.610) 0:02:32.890 ********** 2026-04-06 07:11:53.276502 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j923240674600.3877', 'results_file': '/ansible/.ansible_async/j923240674600.3877', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-06 07:11:53.276660 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j293446492049.3902', 'results_file': '/ansible/.ansible_async/j293446492049.3902', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-06 07:11:53.276679 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j394509272217.3927', 'results_file': '/ansible/.ansible_async/j394509272217.3927', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-06 
07:11:53.276692 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j645808750094.3952', 'results_file': '/ansible/.ansible_async/j645808750094.3952', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-06 07:11:53.276703 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j271846593793.3977', 'results_file': '/ansible/.ansible_async/j271846593793.3977', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-06 07:11:53.276715 | orchestrator | 2026-04-06 07:11:53.276728 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-04-06 07:11:53.276740 | orchestrator | Monday 06 April 2026 07:10:58 +0000 (0:00:05.341) 0:02:38.232 ********** 2026-04-06 07:11:53.276751 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-06 07:11:53.276762 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-06 07:11:53.276773 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-06 07:11:53.276784 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-06 07:11:53.276794 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-06 07:11:53.276805 | orchestrator | 2026-04-06 07:11:53.276816 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-04-06 07:11:53.276827 | orchestrator | Monday 06 April 2026 07:11:03 +0000 (0:00:05.764) 0:02:43.996 ********** 2026-04-06 07:11:53.276838 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 
2026-04-06 07:11:53.276850 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j590166541657.4048', 'results_file': '/ansible/.ansible_async/j590166541657.4048', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-06 07:11:53.276861 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j377248104603.4073', 'results_file': '/ansible/.ansible_async/j377248104603.4073', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-06 07:11:53.276892 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j840907786671.4099', 'results_file': '/ansible/.ansible_async/j840907786671.4099', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-06 07:11:53.276904 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j607378082371.4125', 'results_file': '/ansible/.ansible_async/j607378082371.4125', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-06 07:11:53.276915 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j432056826670.4151', 'results_file': '/ansible/.ansible_async/j432056826670.4151', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-06 07:11:53.276956 | orchestrator | 2026-04-06 07:11:53.276971 | orchestrator | TASK [Create test volume] ****************************************************** 2026-04-06 07:11:53.276983 | orchestrator | Monday 06 April 2026 07:11:14 +0000 (0:00:10.966) 0:02:54.962 ********** 2026-04-06 07:11:53.276997 | orchestrator | ok: [localhost] 2026-04-06 07:11:53.277011 | orchestrator | 2026-04-06 07:11:53.277024 | orchestrator 
| TASK [Attach test volume] ******************************************************
2026-04-06 07:11:53.277036 | orchestrator | Monday 06 April 2026 07:11:19 +0000 (0:00:05.051) 0:03:00.013 **********
2026-04-06 07:11:53.277048 | orchestrator | ok: [localhost]
2026-04-06 07:11:53.277060 | orchestrator |
2026-04-06 07:11:53.277073 | orchestrator | TASK [Create floating ip addresses] ********************************************
2026-04-06 07:11:53.277103 | orchestrator | Monday 06 April 2026 07:11:25 +0000 (0:00:06.032) 0:03:06.046 **********
2026-04-06 07:11:53.277117 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-06 07:11:53.277131 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-06 07:11:53.277150 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-06 07:11:53.277163 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-06 07:11:53.277176 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-06 07:11:53.277189 | orchestrator |
2026-04-06 07:11:53.277202 | orchestrator | TASK [Print floating ip addresses] *********************************************
2026-04-06 07:11:53.277215 | orchestrator | Monday 06 April 2026 07:11:51 +0000 (0:00:25.456) 0:03:31.503 **********
2026-04-06 07:11:53.277228 | orchestrator | ok: [localhost] => (item=test) => {
2026-04-06 07:11:53.277241 | orchestrator |  "msg": "test: 192.168.112.198"
2026-04-06 07:11:53.277254 | orchestrator | }
2026-04-06 07:11:53.277267 | orchestrator | ok: [localhost] => (item=test-1) => {
2026-04-06 07:11:53.277280 | orchestrator |  "msg": "test-1: 192.168.112.104"
2026-04-06 07:11:53.277290 | orchestrator | }
2026-04-06 07:11:53.277301 | orchestrator | ok: [localhost] => (item=test-2) => {
2026-04-06 07:11:53.277312 | orchestrator |  "msg": "test-2: 192.168.112.166"
2026-04-06 07:11:53.277322 | orchestrator | }
2026-04-06 07:11:53.277333 | orchestrator | ok: [localhost] => (item=test-3) => {
2026-04-06 07:11:53.277344 | orchestrator |  "msg": "test-3: 192.168.112.192"
2026-04-06 07:11:53.277354 | orchestrator | }
2026-04-06 07:11:53.277365 | orchestrator | ok: [localhost] => (item=test-4) => {
2026-04-06 07:11:53.277376 | orchestrator |  "msg": "test-4: 192.168.112.199"
2026-04-06 07:11:53.277386 | orchestrator | }
2026-04-06 07:11:53.277397 | orchestrator |
2026-04-06 07:11:53.277408 | orchestrator | PLAY RECAP *********************************************************************
2026-04-06 07:11:53.277420 | orchestrator | localhost : ok=26  changed=8  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-06 07:11:53.277440 | orchestrator |
2026-04-06 07:11:53.277451 | orchestrator |
2026-04-06 07:11:53.277461 | orchestrator | TASKS RECAP ********************************************************************
2026-04-06 07:11:53.277472 | orchestrator | Monday 06 April 2026 07:11:52 +0000 (0:00:01.567) 0:03:33.070 **********
2026-04-06 07:11:53.277483 | orchestrator | ===============================================================================
2026-04-06 07:11:53.277494 | orchestrator | Create floating ip addresses ------------------------------------------- 25.46s
2026-04-06 07:11:53.277504 | orchestrator | Wait for instance creation to complete --------------------------------- 16.02s
2026-04-06 07:11:53.277515 | orchestrator | Create test routers ---------------------------------------------------- 14.22s
2026-04-06 07:11:53.277526 | orchestrator | Add member roles to user test ------------------------------------------ 13.35s
2026-04-06 07:11:53.277536 | orchestrator | Create test subnets ---------------------------------------------------- 12.87s
2026-04-06 07:11:53.277547 | orchestrator | Create test networks --------------------------------------------------- 12.37s
2026-04-06 07:11:53.277558 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.97s
2026-04-06 07:11:53.277568 | orchestrator | Add manager role to user test-admin ------------------------------------- 9.39s
2026-04-06 07:11:53.277579 | orchestrator | Attach test volume ------------------------------------------------------ 6.03s
2026-04-06 07:11:53.277590 | orchestrator | Create test domain ------------------------------------------------------ 5.99s
2026-04-06 07:11:53.277600 | orchestrator | Create test server group ------------------------------------------------ 5.93s
2026-04-06 07:11:53.277611 | orchestrator | Create test instances --------------------------------------------------- 5.87s
2026-04-06 07:11:53.277621 | orchestrator | Add tag to instances ---------------------------------------------------- 5.76s
2026-04-06 07:11:53.277632 | orchestrator | Add metadata to instances ----------------------------------------------- 5.61s
2026-04-06 07:11:53.277643 | orchestrator | Wait for metadata to be added ------------------------------------------- 5.34s
2026-04-06 07:11:53.277653 | orchestrator | Add rule to ssh security group ------------------------------------------ 5.26s
2026-04-06 07:11:53.277664 | orchestrator | Create test user -------------------------------------------------------- 5.17s
2026-04-06 07:11:53.277674 | orchestrator | Create test volume ------------------------------------------------------ 5.05s
2026-04-06 07:11:53.277685 | orchestrator | Create ssh security group ----------------------------------------------- 5.05s
2026-04-06 07:11:53.277696 | orchestrator | Create test project ----------------------------------------------------- 5.04s
2026-04-06 07:11:53.476116 | orchestrator | + server_list
2026-04-06 07:11:53.476211 | orchestrator | + openstack --os-cloud test server list
2026-04-06 07:11:57.322487 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-06 07:11:57.322558 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-04-06 07:11:57.322566 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-06 07:11:57.322571 | orchestrator | | 1ffb2222-76b0-4c33-9cb9-cccac50cb77d | test-4 | ACTIVE | test-3=192.168.112.199, 192.168.202.235 | N/A (booted from volume) | SCS-1L-1 | 2026-04-06 07:11:57.322576 | orchestrator | | 0b831986-3c20-42b1-9723-3d5a676521b6 | test-3 | ACTIVE | test-2=192.168.112.192, 192.168.201.192 | N/A (booted from volume) | SCS-1L-1 | 2026-04-06 07:11:57.322581 | orchestrator | | 0f709c30-1ea2-4513-8aef-595d77244c64 | test-1 | ACTIVE | test-1=192.168.112.104, 192.168.200.92 | N/A (booted from volume) | SCS-1L-1 | 2026-04-06 07:11:57.322586 | orchestrator | | 75b3ae9e-45d4-4502-9b59-90c56e5692aa | test-2 | ACTIVE | test-2=192.168.112.166, 192.168.201.198 | N/A (booted from volume) | SCS-1L-1 | 2026-04-06 07:11:57.322603 | orchestrator | | 796def20-ed7f-4340-916d-2b9955f332ee | test | ACTIVE | test-1=192.168.112.198, 192.168.200.201 | N/A (booted from volume) | SCS-1L-1 | 2026-04-06 07:11:57.322609 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+ 2026-04-06 07:11:57.594241 | orchestrator | + openstack --os-cloud test server show test 2026-04-06 07:12:00.917532 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-06 07:12:00.917679 | orchestrator | | Field | Value | 2026-04-06 
07:12:00.917709 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-06 07:12:00.917730 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-06 07:12:00.917751 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-06 07:12:00.917771 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-06 07:12:00.917791 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-04-06 07:12:00.917811 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-06 07:12:00.917831 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-06 07:12:00.917897 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-06 07:12:00.917911 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-06 07:12:00.917922 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-06 07:12:00.917968 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-06 07:12:00.917979 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-06 07:12:00.917990 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-06 07:12:00.918001 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-06 07:12:00.918013 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-06 07:12:00.918073 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-06 07:12:00.918094 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-06T04:15:56.000000 | 2026-04-06 07:12:00.918121 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-06 07:12:00.918133 | orchestrator | | accessIPv4 | | 2026-04-06 07:12:00.918155 | orchestrator | | accessIPv6 | | 2026-04-06 
07:12:00.918167 | orchestrator | | addresses | test-1=192.168.112.198, 192.168.200.201 | 2026-04-06 07:12:00.918178 | orchestrator | | config_drive | | 2026-04-06 07:12:00.918189 | orchestrator | | created | 2026-04-06T04:15:29Z | 2026-04-06 07:12:00.918200 | orchestrator | | description | None | 2026-04-06 07:12:00.918212 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-06 07:12:00.918223 | orchestrator | | hostId | 21957355140bcc184b1c12f902990c51adb9a73ef43b89b8497ef939 | 2026-04-06 07:12:00.918241 | orchestrator | | host_status | None | 2026-04-06 07:12:00.918265 | orchestrator | | id | 796def20-ed7f-4340-916d-2b9955f332ee | 2026-04-06 07:12:00.918277 | orchestrator | | image | N/A (booted from volume) | 2026-04-06 07:12:00.918289 | orchestrator | | key_name | test | 2026-04-06 07:12:00.918300 | orchestrator | | locked | False | 2026-04-06 07:12:00.918311 | orchestrator | | locked_reason | None | 2026-04-06 07:12:00.918322 | orchestrator | | name | test | 2026-04-06 07:12:00.918333 | orchestrator | | pinned_availability_zone | None | 2026-04-06 07:12:00.918344 | orchestrator | | progress | 0 | 2026-04-06 07:12:00.918361 | orchestrator | | project_id | b933bc95b8d74bedbf85f7b32e53eaa4 | 2026-04-06 07:12:00.918372 | orchestrator | | properties | hostname='test' | 2026-04-06 07:12:00.918391 | orchestrator | | security_groups | name='ssh' | 2026-04-06 07:12:00.918878 | orchestrator | | | name='icmp' | 2026-04-06 07:12:00.918904 | orchestrator | | server_groups | None | 2026-04-06 07:12:00.918951 | orchestrator | | status | ACTIVE | 2026-04-06 07:12:00.918973 | orchestrator | | tags | test | 2026-04-06 
07:12:00.918992 | orchestrator | | trusted_image_certificates | None | 2026-04-06 07:12:00.919012 | orchestrator | | updated | 2026-04-06T07:10:53Z | 2026-04-06 07:12:00.919053 | orchestrator | | user_id | 4176df0ad4d04ae4ba2deebeae721468 | 2026-04-06 07:12:00.919071 | orchestrator | | volumes_attached | delete_on_termination='True', id='55ed4f5b-9558-4efd-891b-02394bcf9221' | 2026-04-06 07:12:00.919082 | orchestrator | | | delete_on_termination='False', id='9703a467-a300-447e-a3d7-87eaf6ab1d29' | 2026-04-06 07:12:00.921865 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-06 07:12:01.181426 | orchestrator | + openstack --os-cloud test server show test-1 2026-04-06 07:12:04.185593 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-06 07:12:04.185703 | orchestrator | | Field | Value | 2026-04-06 07:12:04.185723 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-06 07:12:04.185739 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-06 07:12:04.185753 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-06 07:12:04.185801 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-06 07:12:04.185816 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-04-06 07:12:04.185830 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-06 07:12:04.185843 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-06 07:12:04.185876 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-06 07:12:04.185891 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-06 07:12:04.185904 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-06 07:12:04.185917 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-06 07:12:04.185953 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-06 07:12:04.185966 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-06 07:12:04.185997 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-06 07:12:04.186012 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-06 07:12:04.186098 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-06 07:12:04.186115 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-06T04:15:56.000000 | 2026-04-06 07:12:04.186145 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-06 07:12:04.186162 | orchestrator | | accessIPv4 | | 2026-04-06 07:12:04.186177 | orchestrator | | accessIPv6 | | 2026-04-06 07:12:04.186192 | orchestrator | | 
addresses | test-1=192.168.112.104, 192.168.200.92 |
2026-04-06 07:12:04.186209 | orchestrator | | config_drive | |
2026-04-06 07:12:04.186240 | orchestrator | | created | 2026-04-06T04:15:31Z |
2026-04-06 07:12:04.186261 | orchestrator | | description | None |
2026-04-06 07:12:04.186275 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-06 07:12:04.186288 | orchestrator | | hostId | 21957355140bcc184b1c12f902990c51adb9a73ef43b89b8497ef939 |
2026-04-06 07:12:04.186302 | orchestrator | | host_status | None |
2026-04-06 07:12:04.186326 | orchestrator | | id | 0f709c30-1ea2-4513-8aef-595d77244c64 |
2026-04-06 07:12:04.186341 | orchestrator | | image | N/A (booted from volume) |
2026-04-06 07:12:04.186355 | orchestrator | | key_name | test |
2026-04-06 07:12:04.186369 | orchestrator | | locked | False |
2026-04-06 07:12:04.186391 | orchestrator | | locked_reason | None |
2026-04-06 07:12:04.186405 | orchestrator | | name | test-1 |
2026-04-06 07:12:04.186424 | orchestrator | | pinned_availability_zone | None |
2026-04-06 07:12:04.186438 | orchestrator | | progress | 0 |
2026-04-06 07:12:04.186451 | orchestrator | | project_id | b933bc95b8d74bedbf85f7b32e53eaa4 |
2026-04-06 07:12:04.186464 | orchestrator | | properties | hostname='test-1' |
2026-04-06 07:12:04.186487 | orchestrator | | security_groups | name='ssh' |
2026-04-06 07:12:04.186502 | orchestrator | | | name='icmp' |
2026-04-06 07:12:04.186515 | orchestrator | | server_groups | None |
2026-04-06 07:12:04.186535 | orchestrator | | status | ACTIVE |
2026-04-06 07:12:04.186548 | orchestrator | | tags | test |
2026-04-06 07:12:04.186560 | orchestrator | |
trusted_image_certificates | None |
2026-04-06 07:12:04.186579 | orchestrator | | updated | 2026-04-06T07:10:54Z |
2026-04-06 07:12:04.186593 | orchestrator | | user_id | 4176df0ad4d04ae4ba2deebeae721468 |
2026-04-06 07:12:04.186608 | orchestrator | | volumes_attached | delete_on_termination='True', id='012b0a6e-bcf6-4309-9489-869d9b9c8452' |
2026-04-06 07:12:04.189755 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-06 07:12:04.460599 | orchestrator | + openstack --os-cloud test server show test-2
2026-04-06 07:12:07.489272 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-06 07:12:07.489454 | orchestrator | | Field | Value |
2026-04-06 07:12:07.489501 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-06 07:12:07.489514 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-06 07:12:07.489526 | orchestrator | |
OS-EXT-AZ:availability_zone | nova |
2026-04-06 07:12:07.489538 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-06 07:12:07.489549 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2026-04-06 07:12:07.489560 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-06 07:12:07.489571 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-06 07:12:07.489600 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-06 07:12:07.489612 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-06 07:12:07.489623 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-06 07:12:07.489642 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-06 07:12:07.489653 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-06 07:12:07.489664 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-06 07:12:07.489753 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-06 07:12:07.489780 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-06 07:12:07.489795 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-06 07:12:07.489808 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-06T04:15:57.000000 |
2026-04-06 07:12:07.489831 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-06 07:12:07.489845 | orchestrator | | accessIPv4 | |
2026-04-06 07:12:07.489866 | orchestrator | | accessIPv6 | |
2026-04-06 07:12:07.489879 | orchestrator | | addresses | test-2=192.168.112.166, 192.168.201.198 |
2026-04-06 07:12:07.489893 | orchestrator | | config_drive | |
2026-04-06 07:12:07.489908 | orchestrator | | created | 2026-04-06T04:15:31Z |
2026-04-06 07:12:07.489921 | orchestrator | | description | None |
2026-04-06 07:12:07.489962 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1',
id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-06 07:12:07.489976 | orchestrator | | hostId | b301908c463c5194244abbaa337388541f1ff3778f7515d701b9fb7e |
2026-04-06 07:12:07.489989 | orchestrator | | host_status | None |
2026-04-06 07:12:07.490010 | orchestrator | | id | 75b3ae9e-45d4-4502-9b59-90c56e5692aa |
2026-04-06 07:12:07.490117 | orchestrator | | image | N/A (booted from volume) |
2026-04-06 07:12:07.490132 | orchestrator | | key_name | test |
2026-04-06 07:12:07.490144 | orchestrator | | locked | False |
2026-04-06 07:12:07.490155 | orchestrator | | locked_reason | None |
2026-04-06 07:12:07.490166 | orchestrator | | name | test-2 |
2026-04-06 07:12:07.490177 | orchestrator | | pinned_availability_zone | None |
2026-04-06 07:12:07.490193 | orchestrator | | progress | 0 |
2026-04-06 07:12:07.490205 | orchestrator | | project_id | b933bc95b8d74bedbf85f7b32e53eaa4 |
2026-04-06 07:12:07.490216 | orchestrator | | properties | hostname='test-2' |
2026-04-06 07:12:07.490243 | orchestrator | | security_groups | name='ssh' |
2026-04-06 07:12:07.490255 | orchestrator | | | name='icmp' |
2026-04-06 07:12:07.490266 | orchestrator | | server_groups | None |
2026-04-06 07:12:07.490277 | orchestrator | | status | ACTIVE |
2026-04-06 07:12:07.490288 | orchestrator | | tags | test |
2026-04-06 07:12:07.490299 | orchestrator | | trusted_image_certificates | None |
2026-04-06 07:12:07.490315 | orchestrator | | updated | 2026-04-06T07:10:54Z |
2026-04-06 07:12:07.490327 | orchestrator | | user_id | 4176df0ad4d04ae4ba2deebeae721468 |
2026-04-06 07:12:07.490338 | orchestrator | | volumes_attached | delete_on_termination='True', id='483c19c8-8cd9-4655-b886-899f437c0c74' |
2026-04-06 07:12:07.494507 | orchestrator |
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-06 07:12:07.759380 | orchestrator | + openstack --os-cloud test server show test-3
2026-04-06 07:12:10.726833 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-06 07:12:10.726994 | orchestrator | | Field | Value |
2026-04-06 07:12:10.727012 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-06 07:12:10.727024 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-06 07:12:10.727036 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-06 07:12:10.727047 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-06 07:12:10.727115 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2026-04-06 07:12:10.727130 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-06 07:12:10.727142 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-06
07:12:10.727229 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-06 07:12:10.727243 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-06 07:12:10.727255 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-06 07:12:10.727266 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-06 07:12:10.727277 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-06 07:12:10.727288 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-06 07:12:10.727299 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-06 07:12:10.727318 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-06 07:12:10.727332 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-06 07:12:10.727353 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-06T04:15:57.000000 |
2026-04-06 07:12:10.727375 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-06 07:12:10.727389 | orchestrator | | accessIPv4 | |
2026-04-06 07:12:10.727402 | orchestrator | | accessIPv6 | |
2026-04-06 07:12:10.727414 | orchestrator | | addresses | test-2=192.168.112.192, 192.168.201.192 |
2026-04-06 07:12:10.727427 | orchestrator | | config_drive | |
2026-04-06 07:12:10.727440 | orchestrator | | created | 2026-04-06T04:15:32Z |
2026-04-06 07:12:10.727453 | orchestrator | | description | None |
2026-04-06 07:12:10.727470 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-06 07:12:10.727505 | orchestrator | | hostId | b301908c463c5194244abbaa337388541f1ff3778f7515d701b9fb7e |
2026-04-06 07:12:10.727518 | orchestrator | | host_status | None |
2026-04-06 07:12:10.727538 | orchestrator | | id |
0b831986-3c20-42b1-9723-3d5a676521b6 |
2026-04-06 07:12:10.727552 | orchestrator | | image | N/A (booted from volume) |
2026-04-06 07:12:10.727565 | orchestrator | | key_name | test |
2026-04-06 07:12:10.727579 | orchestrator | | locked | False |
2026-04-06 07:12:10.727592 | orchestrator | | locked_reason | None |
2026-04-06 07:12:10.727606 | orchestrator | | name | test-3 |
2026-04-06 07:12:10.727620 | orchestrator | | pinned_availability_zone | None |
2026-04-06 07:12:10.727658 | orchestrator | | progress | 0 |
2026-04-06 07:12:10.727672 | orchestrator | | project_id | b933bc95b8d74bedbf85f7b32e53eaa4 |
2026-04-06 07:12:10.727684 | orchestrator | | properties | hostname='test-3' |
2026-04-06 07:12:10.727703 | orchestrator | | security_groups | name='ssh' |
2026-04-06 07:12:10.727714 | orchestrator | | | name='icmp' |
2026-04-06 07:12:10.727726 | orchestrator | | server_groups | None |
2026-04-06 07:12:10.727737 | orchestrator | | status | ACTIVE |
2026-04-06 07:12:10.727748 | orchestrator | | tags | test |
2026-04-06 07:12:10.727759 | orchestrator | | trusted_image_certificates | None |
2026-04-06 07:12:10.727770 | orchestrator | | updated | 2026-04-06T07:10:55Z |
2026-04-06 07:12:10.727788 | orchestrator | | user_id | 4176df0ad4d04ae4ba2deebeae721468 |
2026-04-06 07:12:10.727799 | orchestrator | | volumes_attached | delete_on_termination='True', id='bcc7956e-4fb8-4f0b-b006-e411b174ebdc' |
2026-04-06 07:12:10.731051 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-06 07:12:11.003290 | orchestrator | + openstack --os-cloud test server show test-4
2026-04-06 07:12:13.869880 |
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-06 07:12:13.870010 | orchestrator | | Field | Value |
2026-04-06 07:12:13.870094 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-06 07:12:13.870105 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-06 07:12:13.870114 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-06 07:12:13.870123 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-06 07:12:13.870152 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2026-04-06 07:12:13.870167 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-06 07:12:13.870176 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-06 07:12:13.870203 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-06 07:12:13.870213 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-06 07:12:13.870222 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-06 07:12:13.870230 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-06 07:12:13.870239 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-06 07:12:13.870248 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-06 07:12:13.870263 | orchestrator | |
OS-EXT-STS:power_state | Running |
2026-04-06 07:12:13.870276 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-06 07:12:13.870285 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-06 07:12:13.870294 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-06T04:15:57.000000 |
2026-04-06 07:12:13.870310 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-06 07:12:13.870319 | orchestrator | | accessIPv4 | |
2026-04-06 07:12:13.870328 | orchestrator | | accessIPv6 | |
2026-04-06 07:12:13.870337 | orchestrator | | addresses | test-3=192.168.112.199, 192.168.202.235 |
2026-04-06 07:12:13.870345 | orchestrator | | config_drive | |
2026-04-06 07:12:13.870366 | orchestrator | | created | 2026-04-06T04:15:33Z |
2026-04-06 07:12:13.870374 | orchestrator | | description | None |
2026-04-06 07:12:13.870388 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-06 07:12:13.870397 | orchestrator | | hostId | b301908c463c5194244abbaa337388541f1ff3778f7515d701b9fb7e |
2026-04-06 07:12:13.870405 | orchestrator | | host_status | None |
2026-04-06 07:12:13.870420 | orchestrator | | id | 1ffb2222-76b0-4c33-9cb9-cccac50cb77d |
2026-04-06 07:12:13.870429 | orchestrator | | image | N/A (booted from volume) |
2026-04-06 07:12:13.870438 | orchestrator | | key_name | test |
2026-04-06 07:12:13.870447 | orchestrator | | locked | False |
2026-04-06 07:12:13.870461 | orchestrator | | locked_reason | None |
2026-04-06 07:12:13.870470 | orchestrator | | name | test-4 |
2026-04-06 07:12:13.870479 | orchestrator | | pinned_availability_zone | None |
2026-04-06 07:12:13.870492 | orchestrator | | progress | 0 | 2026-04-06
07:12:13.870501 | orchestrator | | project_id | b933bc95b8d74bedbf85f7b32e53eaa4 |
2026-04-06 07:12:13.870510 | orchestrator | | properties | hostname='test-4' |
2026-04-06 07:12:13.870524 | orchestrator | | security_groups | name='ssh' |
2026-04-06 07:12:13.870533 | orchestrator | | | name='icmp' |
2026-04-06 07:12:13.870543 | orchestrator | | server_groups | None |
2026-04-06 07:12:13.870552 | orchestrator | | status | ACTIVE |
2026-04-06 07:12:13.870566 | orchestrator | | tags | test |
2026-04-06 07:12:13.870574 | orchestrator | | trusted_image_certificates | None |
2026-04-06 07:12:13.870583 | orchestrator | | updated | 2026-04-06T07:10:56Z |
2026-04-06 07:12:13.870596 | orchestrator | | user_id | 4176df0ad4d04ae4ba2deebeae721468 |
2026-04-06 07:12:13.870605 | orchestrator | | volumes_attached | delete_on_termination='True', id='5b097108-b650-4096-962a-9dbc133776b1' |
2026-04-06 07:12:13.874491 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-06 07:12:14.138133 | orchestrator | + server_ping
2026-04-06 07:12:14.139852 | orchestrator | ++ tr -d '\r'
2026-04-06 07:12:14.139906 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-06 07:12:16.887430 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-06 07:12:16.887553 | orchestrator | + ping -c3 192.168.112.192
2026-04-06 07:12:16.901660 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data.
2026-04-06 07:12:16.901745 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=8.84 ms
2026-04-06 07:12:17.896750 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.48 ms
2026-04-06 07:12:18.898292 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.47 ms
2026-04-06 07:12:18.898412 | orchestrator |
2026-04-06 07:12:18.898435 | orchestrator | --- 192.168.112.192 ping statistics ---
2026-04-06 07:12:18.898450 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-06 07:12:18.898465 | orchestrator | rtt min/avg/max/mdev = 1.474/4.262/8.838/3.261 ms
2026-04-06 07:12:18.898480 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-06 07:12:18.899392 | orchestrator | + ping -c3 192.168.112.166
2026-04-06 07:12:18.911421 | orchestrator | PING 192.168.112.166 (192.168.112.166) 56(84) bytes of data.
2026-04-06 07:12:18.911524 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=1 ttl=63 time=7.95 ms
2026-04-06 07:12:19.907327 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=2 ttl=63 time=2.50 ms
2026-04-06 07:12:20.908387 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=3 ttl=63 time=1.66 ms
2026-04-06 07:12:20.908485 | orchestrator |
2026-04-06 07:12:20.908503 | orchestrator | --- 192.168.112.166 ping statistics ---
2026-04-06 07:12:20.908515 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-04-06 07:12:20.908527 | orchestrator | rtt min/avg/max/mdev = 1.655/4.033/7.945/2.787 ms
2026-04-06 07:12:20.908539 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-06 07:12:20.908552 | orchestrator | + ping -c3 192.168.112.104
2026-04-06 07:12:20.923567 | orchestrator | PING 192.168.112.104 (192.168.112.104) 56(84) bytes of data.
2026-04-06 07:12:20.923648 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=1 ttl=63 time=9.92 ms
2026-04-06 07:12:21.917700 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=2 ttl=63 time=2.44 ms
2026-04-06 07:12:22.918978 | orchestrator | 64 bytes from 192.168.112.104: icmp_seq=3 ttl=63 time=2.03 ms
2026-04-06 07:12:22.919096 | orchestrator |
2026-04-06 07:12:22.919121 | orchestrator | --- 192.168.112.104 ping statistics ---
2026-04-06 07:12:22.919139 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-06 07:12:22.919156 | orchestrator | rtt min/avg/max/mdev = 2.025/4.797/9.924/3.629 ms
2026-04-06 07:12:22.920071 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-06 07:12:22.920133 | orchestrator | + ping -c3 192.168.112.199
2026-04-06 07:12:22.929838 | orchestrator | PING 192.168.112.199 (192.168.112.199) 56(84) bytes of data.
2026-04-06 07:12:22.929919 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=1 ttl=63 time=5.99 ms
2026-04-06 07:12:23.927672 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=2 ttl=63 time=2.37 ms
2026-04-06 07:12:24.929418 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=3 ttl=63 time=1.66 ms
2026-04-06 07:12:24.929519 | orchestrator |
2026-04-06 07:12:24.929535 | orchestrator | --- 192.168.112.199 ping statistics ---
2026-04-06 07:12:24.929548 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-06 07:12:24.929560 | orchestrator | rtt min/avg/max/mdev = 1.662/3.341/5.988/1.894 ms
2026-04-06 07:12:24.929572 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-06 07:12:24.929583 | orchestrator | + ping -c3 192.168.112.198
2026-04-06 07:12:24.939027 | orchestrator | PING 192.168.112.198 (192.168.112.198) 56(84) bytes of data.
2026-04-06 07:12:24.939133 | orchestrator | 64 bytes from 192.168.112.198: icmp_seq=1 ttl=63 time=4.99 ms
2026-04-06 07:12:25.937479 | orchestrator | 64 bytes from 192.168.112.198: icmp_seq=2 ttl=63 time=2.20 ms
2026-04-06 07:12:26.938518 | orchestrator | 64 bytes from 192.168.112.198: icmp_seq=3 ttl=63 time=1.63 ms
2026-04-06 07:12:26.938620 | orchestrator |
2026-04-06 07:12:26.938636 | orchestrator | --- 192.168.112.198 ping statistics ---
2026-04-06 07:12:26.938649 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-04-06 07:12:26.938660 | orchestrator | rtt min/avg/max/mdev = 1.633/2.941/4.987/1.465 ms
2026-04-06 07:12:26.938877 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-04-06 07:12:27.415473 | orchestrator | ok: Runtime: 0:10:07.718222
2026-04-06 07:12:27.464593 |
2026-04-06 07:12:27.464789 | PLAY RECAP
2026-04-06 07:12:27.464925 | orchestrator | ok: 32 changed: 13 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-06 07:12:27.464987 |
2026-04-06 07:12:27.807764 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-04-06 07:12:27.811207 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-06 07:12:29.636189 |
2026-04-06 07:12:29.636406 | PLAY [Post output play]
2026-04-06 07:12:29.663427 |
2026-04-06 07:12:29.663590 | LOOP [stage-output : Register sources]
2026-04-06 07:12:29.730434 |
2026-04-06 07:12:29.730699 | TASK [stage-output : Check sudo]
2026-04-06 07:12:30.629150 | orchestrator | sudo: a password is required
2026-04-06 07:12:30.769923 | orchestrator | ok: Runtime: 0:00:00.015568
2026-04-06 07:12:30.786341 |
2026-04-06 07:12:30.786554 | LOOP [stage-output : Set source and destination for files and folders]
2026-04-06 07:12:30.827661 |
2026-04-06 07:12:30.827959 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-04-06 07:12:30.896066 | orchestrator | ok
2026-04-06 07:12:30.905246 | 2026-04-06
07:12:30.905399 | LOOP [stage-output : Ensure target folders exist]
2026-04-06 07:12:31.384049 | orchestrator | ok: "docs"
2026-04-06 07:12:31.384372 |
2026-04-06 07:12:31.646689 | orchestrator | ok: "artifacts"
2026-04-06 07:12:31.898553 | orchestrator | ok: "logs"
2026-04-06 07:12:31.917462 |
2026-04-06 07:12:31.917672 | LOOP [stage-output : Copy files and folders to staging folder]
2026-04-06 07:12:31.967171 |
2026-04-06 07:12:31.967483 | TASK [stage-output : Make all log files readable]
2026-04-06 07:12:32.272268 | orchestrator | ok
2026-04-06 07:12:32.281554 |
2026-04-06 07:12:32.281685 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-04-06 07:12:32.316699 | orchestrator | skipping: Conditional result was False
2026-04-06 07:12:32.332680 |
2026-04-06 07:12:32.332835 | TASK [stage-output : Discover log files for compression]
2026-04-06 07:12:32.357886 | orchestrator | skipping: Conditional result was False
2026-04-06 07:12:32.374328 |
2026-04-06 07:12:32.374501 | LOOP [stage-output : Archive everything from logs]
2026-04-06 07:12:32.420548 |
2026-04-06 07:12:32.420735 | PLAY [Post cleanup play]
2026-04-06 07:12:32.429451 |
2026-04-06 07:12:32.429557 | TASK [Set cloud fact (Zuul deployment)]
2026-04-06 07:12:32.489061 | orchestrator | ok
2026-04-06 07:12:32.501947 |
2026-04-06 07:12:32.502081 | TASK [Set cloud fact (local deployment)]
2026-04-06 07:12:32.526288 | orchestrator | skipping: Conditional result was False
2026-04-06 07:12:32.536181 |
2026-04-06 07:12:32.536362 | TASK [Clean the cloud environment]
2026-04-06 07:12:33.122509 | orchestrator | 2026-04-06 07:12:33 - clean up servers
2026-04-06 07:12:33.887832 | orchestrator | 2026-04-06 07:12:33 - testbed-manager
2026-04-06 07:12:33.973990 | orchestrator | 2026-04-06 07:12:33 - testbed-node-2
2026-04-06 07:12:34.059013 | orchestrator | 2026-04-06 07:12:34 - testbed-node-5
2026-04-06 07:12:34.146526 | orchestrator | 2026-04-06 07:12:34 - testbed-node-4
2026-04-06 07:12:34.236865 |
orchestrator | 2026-04-06 07:12:34 - testbed-node-1
2026-04-06 07:12:34.326312 | orchestrator | 2026-04-06 07:12:34 - testbed-node-0
2026-04-06 07:12:34.415087 | orchestrator | 2026-04-06 07:12:34 - testbed-node-3
2026-04-06 07:12:34.504166 | orchestrator | 2026-04-06 07:12:34 - clean up keypairs
2026-04-06 07:12:34.522759 | orchestrator | 2026-04-06 07:12:34 - testbed
2026-04-06 07:12:34.544378 | orchestrator | 2026-04-06 07:12:34 - wait for servers to be gone
2026-04-06 07:12:45.506966 | orchestrator | 2026-04-06 07:12:45 - clean up ports
2026-04-06 07:12:45.735276 | orchestrator | 2026-04-06 07:12:45 - 195d9181-adf2-42da-813a-d4d97ccd2842
2026-04-06 07:12:46.016222 | orchestrator | 2026-04-06 07:12:46 - 1ca5da80-3dc2-4235-bbb4-bafbe41c7881
2026-04-06 07:12:46.277314 | orchestrator | 2026-04-06 07:12:46 - 2763b87c-5281-4e00-ba53-76590b12e813
2026-04-06 07:12:46.500025 | orchestrator | 2026-04-06 07:12:46 - 3285bed2-cec4-4ccb-91c4-fb3e80a0e3a0
2026-04-06 07:12:46.712604 | orchestrator | 2026-04-06 07:12:46 - 6635e5ac-c7e6-44be-99c8-363b9d2ef633
2026-04-06 07:12:47.740512 | orchestrator | 2026-04-06 07:12:47 - e88d74bf-1721-4cba-962f-60943799dbc5
2026-04-06 07:12:47.955314 | orchestrator | 2026-04-06 07:12:47 - fc1736d2-a17d-4dd3-a1e9-05946beea953
2026-04-06 07:12:48.172969 | orchestrator | 2026-04-06 07:12:48 - clean up volumes
2026-04-06 07:12:48.285482 | orchestrator | 2026-04-06 07:12:48 - testbed-volume-5-node-base
2026-04-06 07:12:48.327428 | orchestrator | 2026-04-06 07:12:48 - testbed-volume-2-node-base
2026-04-06 07:12:48.373547 | orchestrator | 2026-04-06 07:12:48 - testbed-volume-1-node-base
2026-04-06 07:12:48.415107 | orchestrator | 2026-04-06 07:12:48 - testbed-volume-3-node-base
2026-04-06 07:12:48.459517 | orchestrator | 2026-04-06 07:12:48 - testbed-volume-4-node-base
2026-04-06 07:12:48.507182 | orchestrator | 2026-04-06 07:12:48 - testbed-volume-0-node-base
2026-04-06 07:12:48.550498 | orchestrator | 2026-04-06 07:12:48 -
testbed-volume-manager-base
2026-04-06 07:12:48.589945 | orchestrator | 2026-04-06 07:12:48 - testbed-volume-4-node-4
2026-04-06 07:12:48.637329 | orchestrator | 2026-04-06 07:12:48 - testbed-volume-2-node-5
2026-04-06 07:12:48.679291 | orchestrator | 2026-04-06 07:12:48 - testbed-volume-5-node-5
2026-04-06 07:12:48.722817 | orchestrator | 2026-04-06 07:12:48 - testbed-volume-0-node-3
2026-04-06 07:12:48.767414 | orchestrator | 2026-04-06 07:12:48 - testbed-volume-3-node-3
2026-04-06 07:12:48.814762 | orchestrator | 2026-04-06 07:12:48 - testbed-volume-6-node-3
2026-04-06 07:12:48.858384 | orchestrator | 2026-04-06 07:12:48 - testbed-volume-7-node-4
2026-04-06 07:12:48.901093 | orchestrator | 2026-04-06 07:12:48 - testbed-volume-1-node-4
2026-04-06 07:12:48.940209 | orchestrator | 2026-04-06 07:12:48 - testbed-volume-8-node-5
2026-04-06 07:12:48.981398 | orchestrator | 2026-04-06 07:12:48 - disconnect routers
2026-04-06 07:12:49.133498 | orchestrator | 2026-04-06 07:12:49 - testbed
2026-04-06 07:12:50.126751 | orchestrator | 2026-04-06 07:12:50 - clean up subnets
2026-04-06 07:12:50.178560 | orchestrator | 2026-04-06 07:12:50 - subnet-testbed-management
2026-04-06 07:12:50.349245 | orchestrator | 2026-04-06 07:12:50 - clean up networks
2026-04-06 07:12:50.533103 | orchestrator | 2026-04-06 07:12:50 - net-testbed-management
2026-04-06 07:12:50.821341 | orchestrator | 2026-04-06 07:12:50 - clean up security groups
2026-04-06 07:12:50.863774 | orchestrator | 2026-04-06 07:12:50 - testbed-management
2026-04-06 07:12:50.982479 | orchestrator | 2026-04-06 07:12:50 - testbed-node
2026-04-06 07:12:51.083548 | orchestrator | 2026-04-06 07:12:51 - clean up floating ips
2026-04-06 07:12:51.112363 | orchestrator | 2026-04-06 07:12:51 - 81.163.193.235
2026-04-06 07:12:51.506167 | orchestrator | 2026-04-06 07:12:51 - clean up routers
2026-04-06 07:12:51.566305 | orchestrator | 2026-04-06 07:12:51 - testbed
2026-04-06 07:12:52.587496 | orchestrator | ok: Runtime: 0:00:19.530743
2026-04-06 07:12:52.591540 |
2026-04-06 07:12:52.591694 | PLAY RECAP
2026-04-06 07:12:52.591818 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-04-06 07:12:52.591872 |
2026-04-06 07:12:52.730192 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-04-06 07:12:52.731242 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-04-06 07:12:53.625022 |
2026-04-06 07:12:53.625259 | PLAY [Cleanup play]
2026-04-06 07:12:53.646673 |
2026-04-06 07:12:53.646928 | TASK [Set cloud fact (Zuul deployment)]
2026-04-06 07:12:53.712196 | orchestrator | ok
2026-04-06 07:12:53.720957 |
2026-04-06 07:12:53.721147 | TASK [Set cloud fact (local deployment)]
2026-04-06 07:12:53.756990 | orchestrator | skipping: Conditional result was False
2026-04-06 07:12:53.774065 |
2026-04-06 07:12:53.774265 | TASK [Clean the cloud environment]
2026-04-06 07:12:54.969781 | orchestrator | 2026-04-06 07:12:54 - clean up servers
2026-04-06 07:12:55.468603 | orchestrator | 2026-04-06 07:12:55 - clean up keypairs
2026-04-06 07:12:55.487974 | orchestrator | 2026-04-06 07:12:55 - wait for servers to be gone
2026-04-06 07:12:55.532181 | orchestrator | 2026-04-06 07:12:55 - clean up ports
2026-04-06 07:12:55.601493 | orchestrator | 2026-04-06 07:12:55 - clean up volumes
2026-04-06 07:12:55.660918 | orchestrator | 2026-04-06 07:12:55 - disconnect routers
2026-04-06 07:12:55.692202 | orchestrator | 2026-04-06 07:12:55 - clean up subnets
2026-04-06 07:12:55.717169 | orchestrator | 2026-04-06 07:12:55 - clean up networks
2026-04-06 07:12:55.872997 | orchestrator | 2026-04-06 07:12:55 - clean up security groups
2026-04-06 07:12:55.906508 | orchestrator | 2026-04-06 07:12:55 - clean up floating ips
2026-04-06 07:12:55.931876 | orchestrator | 2026-04-06 07:12:55 - clean up routers
2026-04-06 07:12:56.312670 | orchestrator | ok: Runtime: 0:00:01.359741
2026-04-06 07:12:56.316534 | 2026-04-06
07:12:56.316710 | PLAY RECAP 2026-04-06 07:12:56.316847 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-04-06 07:12:56.316917 | 2026-04-06 07:12:56.452946 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-04-06 07:12:56.454007 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-04-06 07:12:57.193259 | 2026-04-06 07:12:57.193431 | PLAY [Base post-fetch] 2026-04-06 07:12:57.209677 | 2026-04-06 07:12:57.209835 | TASK [fetch-output : Set log path for multiple nodes] 2026-04-06 07:12:57.265639 | orchestrator | skipping: Conditional result was False 2026-04-06 07:12:57.280095 | 2026-04-06 07:12:57.280322 | TASK [fetch-output : Set log path for single node] 2026-04-06 07:12:57.329428 | orchestrator | ok 2026-04-06 07:12:57.337900 | 2026-04-06 07:12:57.338034 | LOOP [fetch-output : Ensure local output dirs] 2026-04-06 07:12:57.843038 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/bd4205f76b04427cb48779fdbca318fd/work/logs" 2026-04-06 07:12:58.158304 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/bd4205f76b04427cb48779fdbca318fd/work/artifacts" 2026-04-06 07:12:58.438008 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/bd4205f76b04427cb48779fdbca318fd/work/docs" 2026-04-06 07:12:58.461433 | 2026-04-06 07:12:58.461617 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-04-06 07:12:59.418140 | orchestrator | changed: .d..t...... ./ 2026-04-06 07:12:59.418477 | orchestrator | changed: All items complete 2026-04-06 07:12:59.418527 | 2026-04-06 07:13:00.218710 | orchestrator | changed: .d..t...... ./ 2026-04-06 07:13:00.972351 | orchestrator | changed: .d..t...... 
./ 2026-04-06 07:13:01.002045 | 2026-04-06 07:13:01.002195 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-04-06 07:13:01.039148 | orchestrator | skipping: Conditional result was False 2026-04-06 07:13:01.042001 | orchestrator | skipping: Conditional result was False 2026-04-06 07:13:01.066963 | 2026-04-06 07:13:01.067108 | PLAY RECAP 2026-04-06 07:13:01.067212 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-04-06 07:13:01.067265 | 2026-04-06 07:13:01.207322 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-04-06 07:13:01.208321 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-04-06 07:13:01.977132 | 2026-04-06 07:13:01.977306 | PLAY [Base post] 2026-04-06 07:13:01.992470 | 2026-04-06 07:13:01.992622 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-04-06 07:13:02.989231 | orchestrator | changed 2026-04-06 07:13:02.996480 | 2026-04-06 07:13:02.996605 | PLAY RECAP 2026-04-06 07:13:02.996670 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-04-06 07:13:02.996732 | 2026-04-06 07:13:03.116651 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-04-06 07:13:03.117719 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-04-06 07:13:03.928466 | 2026-04-06 07:13:03.928643 | PLAY [Base post-logs] 2026-04-06 07:13:03.939569 | 2026-04-06 07:13:03.939715 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-04-06 07:13:04.415000 | localhost | changed 2026-04-06 07:13:04.441714 | 2026-04-06 07:13:04.442012 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-04-06 07:13:04.480767 | localhost | ok 2026-04-06 07:13:04.487161 | 2026-04-06 07:13:04.487332 | TASK [Set zuul-log-path fact] 2026-04-06 
07:13:04.507630 | localhost | ok 2026-04-06 07:13:04.523238 | 2026-04-06 07:13:04.523390 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-04-06 07:13:04.561408 | localhost | ok 2026-04-06 07:13:04.567874 | 2026-04-06 07:13:04.568038 | TASK [upload-logs : Create log directories] 2026-04-06 07:13:05.080095 | localhost | changed 2026-04-06 07:13:05.085981 | 2026-04-06 07:13:05.086171 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-04-06 07:13:05.600526 | localhost -> localhost | ok: Runtime: 0:00:00.004561 2026-04-06 07:13:05.604757 | 2026-04-06 07:13:05.604877 | TASK [upload-logs : Upload logs to log server] 2026-04-06 07:13:06.176512 | localhost | Output suppressed because no_log was given 2026-04-06 07:13:06.179355 | 2026-04-06 07:13:06.179529 | LOOP [upload-logs : Compress console log and json output] 2026-04-06 07:13:06.236985 | localhost | skipping: Conditional result was False 2026-04-06 07:13:06.241620 | localhost | skipping: Conditional result was False 2026-04-06 07:13:06.254790 | 2026-04-06 07:13:06.255053 | LOOP [upload-logs : Upload compressed console log and json output] 2026-04-06 07:13:06.339091 | localhost | skipping: Conditional result was False 2026-04-06 07:13:06.339701 | 2026-04-06 07:13:06.342187 | localhost | skipping: Conditional result was False 2026-04-06 07:13:06.349449 | 2026-04-06 07:13:06.349753 | LOOP [upload-logs : Upload console log and json output]